Search Results (2,275)

Search Parameters:
Keywords = software sensors

27 pages, 1064 KiB  
Article
Actuators with Two Double Gimbal Magnetically Suspended Control Moment Gyros for the Attitude Control of the Satellites
by Romulus Lungu, Alexandru-Nicolae Tudosie, Mihai-Aureliu Lungu and Nicoleta-Claudia Crăciunoiu
Micromachines 2024, 15(9), 1159; https://doi.org/10.3390/mi15091159 (registering DOI) - 16 Sep 2024
Viewed by 214
Abstract
The paper proposes a novel automatic control system for the attitude of mini-satellites equipped with an actuator having N = 2 DGMSCMGs (Double Gimbal Magnetically Suspended Control Moment Gyros) in a parallel and orthogonal configuration, as well as a DGMSCMG-type sensor for measuring the satellite's absolute angular rate. The proportional-derivative controller, designed based on Lyapunov function theory, computes the control law from which the angular rates applied to the servo systems actuating the DGMSCMG gyroscopic gimbals are derived. The gimbals' angular rates create gyroscopic couples acting on the satellite in order to control its attitude with respect to the local orbital frame. The proposed control architecture was implemented in software and validated, and analysis of the obtained results showed cancellation of the convergence errors and excellent angular-rate precision.
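The attitude control law summarized above can be illustrated with a minimal proportional-derivative sketch. The gains and error values below are hypothetical placeholders; the paper's actual controller is derived from Lyapunov functions for the full DGMSCMG dynamics, which this sketch does not model.

```python
import numpy as np

def pd_attitude_control(att_err, rate_err, kp, kd):
    """Proportional-derivative attitude control law (illustrative sketch).

    att_err  : 3-vector of attitude errors w.r.t. the local orbital frame [rad]
    rate_err : 3-vector of angular-rate errors [rad/s]
    kp, kd   : proportional and derivative gains (hypothetical values)
    Returns the commanded control torque [N*m], opposing the errors.
    """
    return -(kp * np.asarray(att_err) + kd * np.asarray(rate_err))

# Hypothetical errors and gains: the commanded torque opposes the error.
tau = pd_attitude_control([0.01, -0.02, 0.0], [0.001, 0.0, -0.002],
                          kp=0.5, kd=2.0)
```

In the paper, the controller output is mapped to gimbal angular-rate commands for the servo systems rather than applied directly as a torque.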
25 pages, 10720 KiB  
Article
Fatigue Analysis of Shovel Body Based on Tractor Subsoiling Operation Measured Data
by Bing Zhang, Tiecheng Bai, Gang Wu, Hongwei Wang, Qingzhen Zhu, Guangqiang Zhang, Zhijun Meng and Changkai Wen
Agriculture 2024, 14(9), 1604; https://doi.org/10.3390/agriculture14091604 - 14 Sep 2024
Viewed by 278
Abstract
This paper aims to investigate the effects of soil penetration resistance, tillage depth, and operating speed on the deformation and fatigue of the subsoiling shovel based on real-time measurement of tractor operating condition data. Various types of sensors, such as force, displacement, and angle sensors, were integrated. The software and hardware architectures of the monitoring system were designed to develop a field operation condition parameter monitoring system, which can measure, in real time, the tractor's lower tie-bar traction force, speed, latitude and longitude, tillage depth, the strain of the subsoiling shovel, and other condition parameters. The time-domain extrapolation method was used to process the measured data to obtain the load spectrum. Linear damage accumulation theory was used to calculate the load damage of the subsoiling shovel, and the magnitude of the damage value was used to characterize the severity of the operation. The signal acquisition test and typical parameter test were conducted for the monitoring system, and the results showed that its reliability and accuracy met the requirements. The subsoiling operation test covered two soil penetration resistances (1750 kPa and 2750 kPa), three tillage depths (250 mm, 300 mm, and 350 mm), and three operating speeds (4 km/h low, 6 km/h medium, and 8 km/h high), totaling 18 test conditions. Finally, the effects of changes in the working condition parameters on the overall damage of the subsoiling shovels, and the differences in damage between the front and rear rows of shovels under the same test conditions, were analyzed.
The test results show that, under the same soil penetration resistance, the overall damage sustained by the subsoiling shovels increases with either tillage depth or operating speed. In particular, increasing the tillage depth raised the severity of subsoiling shovel damage by 19.73%, higher than the 17.48% increase due to soil penetration resistance and the 13.07% increase due to operating speed. Notably, the front subsoiling shovels consistently sustained more damage than the rear ones, with the difference reaching up to 16.86%. This paper may provide useful information for subsoiling operations, i.e., both operational efficiency and the damage level of subsoiling shovels should be considered.
(This article belongs to the Section Agricultural Technology)
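The linear damage accumulation step described in the abstract (a Palmgren-Miner summation over a load spectrum, e.g. one obtained by rainflow counting) can be sketched as follows. The S-N curve constants `C` and `m` and the spectrum bins are invented illustration values, not the paper's measured data.

```python
import numpy as np

def miner_damage(ranges, counts, C=1e12, m=3.0):
    """Linear (Palmgren-Miner) damage accumulation over a load spectrum.

    ranges : load/stress range of each spectrum bin (e.g., from rainflow counting)
    counts : number of cycles observed in each bin
    C, m   : hypothetical S-N curve constants, with allowable cycles N = C / S**m
    Damage D = sum(n_i / N_i); D >= 1 predicts fatigue failure.
    """
    N_allow = C / np.asarray(ranges, dtype=float) ** m
    return float(np.sum(np.asarray(counts, dtype=float) / N_allow))

# A deeper tillage pass (larger load ranges) accumulates more damage.
shallow = miner_damage([100.0, 200.0], [1000, 100])
deep = miner_damage([150.0, 300.0], [1000, 100])
```

In the paper's workflow, the measured strain signals are first extrapolated in the time domain to a representative duration before the spectrum and damage values are computed.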
Figures:
Figure 1: General scheme of the test system.
Figure 2: Sensor installation schematic.
Figure 3: Strain measurement points diagram. The numerical labels in the figure indicate the strain gauge test points located at various positions.
Figure 4: Tillage depth calibration curve.
Figure 5: Test system hardware architecture design. The differently colored lines in the figure represent various connection pathways.
Figure 6: Human-machine interaction interface.
Figure 7: Schematic diagram explaining the analysis process.
Figure 8: Tractor and subsoiler.
Figure 9: Schematic diagram of soil firmness test.
Figure 10: Schematic diagram of changes in soil firmness.
Figure 11: Schematic diagram of soil moisture content test.
Figure 12: Actual site map of field test.
Figure 13: Speed test results.
Figure 14: Horizontal traction test results.
Figure 15: Strain test results.
Figure 16: Actual working condition strain data curve of some measurement points of subsoiling shovels: (a) Working Condition 15, measurement point 6 (0°); (b) Working Condition 15, measurement point 6 (45°); (c) Working Condition 15, measurement point 6 (90°).
Figure 17: Actual working condition principal stress data curve of some measurement points of subsoiling shovels: (a) Working Condition 15, measurement point 6; (b) Working Condition 4, measurement point 2.
Figure 18: Time-domain comparison before and after extrapolation of measurement point 6 for Condition 15 (clayey, 8 km/h, 300 mm): (a) before extrapolation; (b) after extrapolation.
Figure 19: Comparison of rainflow matrices before and after extrapolation for Condition 15 (clayey, 8 km/h, 300 mm), measurement point 6: (a) before extrapolation; (b) after extrapolation.
Figure 20: Severeness evaluation for the shovel body at different soil penetration resistances: (a) Group I (Condition 1, Condition 10); (b) Group II (Condition 9, Condition 18).
Figure 21: Severeness evaluation for the shovel body at different tillage depths: (a) Group I (Condition 1, Condition 4, Condition 7); (b) Group II (Condition 12, Condition 15, Condition 18).
Figure 22: Severeness evaluation for the shovel body at different operating speeds: (a) Group I (Condition 1, Condition 2, Condition 3); (b) Group II (Condition 16, Condition 17, Condition 18).
21 pages, 7082 KiB  
Article
Dynamic Measurement Method for Steering Wheel Angle of Autonomous Agricultural Vehicles
by Jinyang Li, Zhaozhao Wu, Meiqing Li and Zhijian Shang
Agriculture 2024, 14(9), 1602; https://doi.org/10.3390/agriculture14091602 - 13 Sep 2024
Viewed by 376
Abstract
Steering wheel angle is an essential parameter for the navigation control of autonomous wheeled vehicles. At present, the combination of rotary angle sensors and four-link mechanisms is the main sensing approach for the steering wheel angle, offering high measurement accuracy and wide adoption in autonomous agricultural vehicles. However, in a complex and challenging farmland environment, this approach suffers from a series of prominent problems: complicated installation and debugging, spattered mud blocking the parallel four-bar mechanism, breakage of the sensor wire during operation, and separate calibrations for different vehicles. To avoid these problems, a novel dynamic measurement method for the steering wheel angle is presented based on vehicle attitude information and a non-contact attitude sensor. First, the working principle of the proposed measurement method and the effect of zero position error on measurement accuracy and path tracking are analyzed. Then, an optimization algorithm for the zero position error of the steering wheel angle is proposed. The experimental platform is assembled on a 2ZG-6DM rice transplanter through software design and hardware modification. Finally, comparative tests are conducted to demonstrate the effectiveness and superiority of the proposed dynamic sensing method. Experimental results show that the average absolute error on the straight path is 0.057° with a standard deviation of 0.483°, and the average absolute error on the turning path is 0.686° with a standard deviation of 0.931°. This implies that the proposed dynamic sensing method can accurately collect the steering wheel angle. Compared to the traditional measurement method, it greatly improves measurement reliability and avoids complicated installation and debugging on different vehicles. Separate calibrations for different vehicles are not needed, since the proposed method does not depend on the kinematic models of the vehicles. Given that the attitude sensor can be installed at a higher position on the wheel, sensor damage from mud blocking and wire breakage is also avoided.
(This article belongs to the Section Agricultural Technology)
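The non-contact idea (the steering wheel angle as the difference between the yaw of a wheel-mounted attitude sensor and the vehicle body yaw, corrected by an estimated zero-position error) can be sketched as below. The function and the offset value are illustrative assumptions, not the paper's implementation.

```python
def steering_wheel_angle(wheel_yaw_deg, body_yaw_deg, zero_offset_deg=0.0):
    """Steering wheel angle from two yaw readings (sketch of the non-contact
    measuring principle). zero_offset_deg models the zero-position error
    that the paper's optimization algorithm estimates."""
    diff = wheel_yaw_deg - body_yaw_deg - zero_offset_deg
    # Wrap to (-180, 180] so left/right turns keep a consistent sign,
    # even when the headings straddle the +/-180 degree boundary.
    return (diff + 180.0) % 360.0 - 180.0

angle = steering_wheel_angle(wheel_yaw_deg=95.0, body_yaw_deg=90.0,
                             zero_offset_deg=1.5)
```

The wrap step matters in practice: a wheel yaw of -179° against a body yaw of 179° is a small 2° turn, not a 358° one.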
Figures:
Figure 1: Problems of steering angle measurement. (a) Complex installation; (b) blocked by spattering soil; (c) recalibration when installing on different vehicles; (d) damage of sensor wire.
Figure 2: Installation diagram of steering wheel angle sensor. (a) Steering wheel angle sensor; (b) installation of steering wheel angle sensor.
Figure 3: Schematic diagram of non-contact measuring principle.
Figure 4: Schematic diagram of four common situations. (a) The wheel and the vehicle body are both to the right relative to the initial direction; (b) the wheel is to the right while the body is to the left relative to the initial direction; (c) the wheel is to the left while the body is to the right relative to the initial direction; (d) the wheel and the vehicle body are both to the left relative to the initial direction.
Figure 5: Flowchart of angle value resolver.
Figure 6: Diagram of deviation between the steering wheel and vehicle body.
Figure 7: Effect of deviation ∆δ on the lateral deviation in steady state.
Figure 8: Flowchart of zero error optimization method.
Figure 9: Experimental platform. (1) Attitude sensor; (2) navigation controller; (3) push rod of gear; (4) navigation base station; (5) angle sensor; (6) vehicle-mounted controller; (7) electric steering wheel; (9) navigation mobile station antenna; (10) serial screen; (11) data transmission module.
Figure 10: Installation schematic diagram of angle sensor. (a) Structure diagram; (b) installation drawing. 1. Steering shaft; 2. trellis bar; 3. angle sensor; 4. fixed bracket; 5. fixed link; 6. ball tie rod.
Figure 11: Experiment path.
Figure 12: Field test photo.
Figure 13: Comparison of experiment results. (a) Variation of lateral deviation; (b) angle output.
Figure 14: Segmented amplification of Figure 13. (a) 0–260 ms; (b) 261–340 ms; (c) 341–555 ms; (d) 556–625 ms; (e) 626–828 ms; (f) 829–880 ms; (g) 881–1000 ms.
24 pages, 17247 KiB  
Article
Efficient Lossy Compression of Video Sequences of Automotive High-Dynamic Range Image Sensors for Advanced Driver-Assistance Systems and Autonomous Vehicles
by Paweł Pawłowski and Karol Piniarski
Electronics 2024, 13(18), 3651; https://doi.org/10.3390/electronics13183651 - 13 Sep 2024
Viewed by 343
Abstract
In this paper, we introduce an efficient lossy coding procedure specifically tailored for handling video sequences of automotive high-dynamic range (HDR) image sensors in advanced driver-assistance systems (ADASs) for autonomous vehicles. Nowadays, mainly for security reasons, lossless compression is used in the automotive industry; however, it offers very low compression rates. To obtain higher compression rates, we suggest using lossy codecs, especially when testing image processing algorithms in software-in-the-loop (SiL) or hardware-in-the-loop (HiL) conditions. Our approach leverages the high-quality VP9 codec, operating in two distinct modes: grayscale image compression for automatic image analysis and color (RGB) image compression for manual analysis. In both modes, images are acquired from the automotive-specific RCCC (red, clear, clear, clear) image sensor. The codec is designed to achieve controlled image quality and state-of-the-art compression ratios while maintaining real-time feasibility. In automotive applications, the inherent data loss poses challenges for lossy codecs, particularly in rapidly changing scenes with intricate details. To address this, we propose configuring the lossy codec in variable bitrate (VBR) mode with a constrained quality (CQ) parameter. By adjusting the quantization parameter, users can tailor the codec behavior to their specific application requirements. In this context, a detailed analysis of the quality of lossy compressed images in terms of the structural similarity index metric (SSIM) and the peak signal-to-noise ratio (PSNR) is presented. From this analysis, we identified the codec parameters with the greatest impact on the preservation of video quality and the compression ratio.
The proposed compression settings are very efficient: the compression ratios vary from 51 to 7765 for grayscale image mode and from 4.51 to 602.6 for RGB image mode, depending on the specified output image quality settings. We reached 129 frames per second (fps) for compression and 315 fps for decompression in grayscale mode, and 102 fps for compression and 121 fps for decompression in RGB mode. These rates make it possible to achieve a much higher compression ratio than lossless compression while maintaining control over image quality.
(This article belongs to the Special Issue Deep Perception in Autonomous Driving)
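Two of the quantities used in the evaluation, PSNR and compression ratio, are straightforward to compute. This sketch uses a toy 2 x 2 frame and made-up byte counts rather than the paper's sequences; SSIM, which the paper also reports, is deliberately omitted here for brevity.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio [dB] between an original frame and its
    lossy-decoded counterpart (arrays of pixel values)."""
    err = np.asarray(original, dtype=float) - np.asarray(decoded, dtype=float)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """Compression ratio = uncompressed size / compressed size."""
    return raw_bytes / compressed_bytes

# Toy frame: a uniform 1-level error gives MSE = 1.
ref = np.array([[10, 20], [30, 40]], dtype=np.uint8)
dec = ref + 1
q = psnr(ref, dec)              # 10*log10(255^2 / 1), about 48.13 dB
r = compression_ratio(1000, 40)
```

Higher PSNR means the lossy frame is closer to the original; the paper tunes the CQ/quantization parameters to trade this quality against the ratio.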
Figures:
Figure 1: Most popular color filter arrays (CFAs) in automotive sensors: (a) monochrome (CCCC); (b) RCCC; (c) RCCB; (d) RGCB; (e) RYYc; (f) RGGB (C—clear; R—red; B—blue; G—grey; Y—yellow; c—cyan).
Figure 2: Illustrative example of the variability of the output stream (bitrate) and image quality across Q, CQ, CBR, and VBR modes for the VP9 codec [25].
Figure 3: Lossy compression process scheme used for RCCC images with conversion to monochrome or RGB images.
Figure 4: PSNR and SSIM within sequence 3 with various GOP sizes and quality settings (CRF = 15 for high quality and CRF = 55 for reduced quality).
Figure 5: One image from sequence 7 presented in RGB color space.
17 pages, 8965 KiB  
Article
Numerical Investigation on the Influence of Turbine Rotor Parameters on the Eddy Current Sensor for the Dynamic Blade Tip Clearance Measurement
by Lingqiang Zhao, Fulin Liu, Yaguo Lyu, Zhenxia Liu and Ziyu Zhao
Sensors 2024, 24(18), 5938; https://doi.org/10.3390/s24185938 - 13 Sep 2024
Viewed by 177
Abstract
Eddy current sensors are increasingly being used to measure dynamic blade tip clearance in turbines due to their robust anti-interference capabilities and non-contact measurement advantages. However, current research primarily focuses on enhancing the performance of the eddy current sensors themselves, and few studies have investigated the influence of turbine rotor parameters on the dynamic blade tip clearance measurements taken by these sensors. Hence, this paper addresses this gap by using COMSOL Multiphysics 6.2 software to establish a finite element model with circuit interfaces, whose validity was verified through experiments. This model is used to simulate the voltage output of the sensor and the measurement of dynamic blade tip clearance under various rotor parameters. The results indicate that the length and number of blades, as well as the hub radius, significantly affect the sensor voltage output in comparison to rotation speed. Furthermore, we show that traditional static calibration methods are inadequate for measuring dynamic blade tip clearance with eddy current sensors; instead, it is demonstrated that incorporating rotor parameters into the calibration of eddy current sensors can enhance the accuracy of dynamic blade tip clearance measurements.
(This article belongs to the Section Electronic Sensors)
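The static calibration the paper argues is insufficient on its own typically amounts to fitting a curve that maps sensor voltage to clearance from bench measurements. The values and the quadratic fit below are hypothetical illustration, not the paper's calibration data.

```python
import numpy as np

# Hypothetical bench calibration: known gaps vs. measured probe voltage
# (eddy current probes give larger output for smaller clearance).
clearance_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # known gaps
voltage_v = np.array([4.1, 3.2, 2.6, 2.2, 1.9])      # measured output

# Fit a quadratic calibration curve: voltage -> clearance.
coeffs = np.polyfit(voltage_v, clearance_mm, deg=2)
predict = np.poly1d(coeffs)

est = predict(3.2)   # estimated clearance for a 3.2 V reading
```

The paper's point is that a single static curve like this ignores rotor parameters (blade count, blade length, hub radius), which shift the voltage response under rotation.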
Figures:
Figure 1: Schematic diagram of measuring principle.
Figure 2: Equivalent circuit of measurement.
Figure 3: The model for BTC measurement: (a) practical model for BTC measurement; (b) 2D simplified model.
Figure 4: The model geometry.
Figure 5: The output signal waveform at a rotation speed of 20,000 r/min: (a) the raw signal; (b) a partially enlarged view of (a).
Figure 6: The processed signal.
Figure 7: Mesh of the calculation domain: (a) rotating boundary; (b) the rotating air domain.
Figure 8: Effect of mesh refinement on the sensor voltage output corresponding to different meshes.
Figure 9: (a) Illustration of testing setup; (b) image of simplified tested disc made of aluminum.
Figure 10: Comparison of the numerical calculations with the experimental results.
Figure 11: Calibration curve.
Figure 12: The sensor voltage output with time at different rotational speeds after shifting phase.
Figure 13: Measuring clearance and relative measuring error at different rotational speeds.
Figure 14: The distribution of magnetic flux density (T) for the same blade at different times: (a) t = 0.004 ms; (b) t = 0.004 ms; (c) t = 0.005 ms; (d) t = 0.006 ms.
Figure 15: The sensor voltage output over time for different numbers of blades after shifting phase.
Figure 16: Measuring clearance and relative measuring error for different numbers of blades.
Figure 17: The distribution of magnetic flux density (T) for different numbers of blades when the blade passes through the sensor: (a) 10 blades; (b) 18 blades.
Figure 18: The sensor voltage output over time for different blade lengths.
Figure 19: Measuring clearance and relative measuring error for different blade lengths.
Figure 20: The distribution of magnetic flux density (T) for different blade lengths at t = 0.65 ms: (a) blade length 50 mm; (b) blade length 100 mm.
Figure 21: The distribution of magnetic flux density on the sensor (T) for different blade lengths at t = 0.7 ms: (a) blade length 50 mm; (b) blade length 100 mm.
Figure 22: The distribution of magnetic flux density (T) for different blade lengths at t = 0.75 ms: (a) blade length 50 mm; (b) blade length 100 mm.
Figure 23: The sensor voltage output over time for different radii of disk.
Figure 24: Measuring clearance and relative measuring error for different radii of rotor hub.
17 pages, 11753 KiB  
Article
An Industrial Internet-of-Things (IIoT) Open Architecture for Information and Decision Support Systems in Scientific Field Campaigns
by Yehuda Arav, Ziv Klausner, Hadas David-Sarrousi, Gadi Eidelheit and Eyal Fattal
Sensors 2024, 24(18), 5916; https://doi.org/10.3390/s24185916 - 12 Sep 2024
Viewed by 225
Abstract
Information and decision support systems are essential to conducting scientific field campaigns in the atmospheric sciences. However, their development is costly and time-consuming, since each field campaign has its own research goals, which result in a unique set of sensors and various analysis procedures. To reduce development costs, we present a software framework based on the Industrial Internet of Things (IIoT) and an implementation using well-established and newly developed open-source components. This framework architecture and these components allow developers to customize the software to a campaign's specific needs while keeping coding to a minimum. The framework's applicability was tested in two scientific field campaigns dealing with air quality questions by developing a specialized IIoT application for each one. Each application provided online monitoring of the acquired data and an intuitive interface for the scientific team to perform the analysis. The framework presented in this study is sufficiently robust and adaptable to meet the diverse requirements of field campaigns.
(This article belongs to the Section Internet of Things)
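The device-specific parsing step in the framework's data-acquisition flow (a raw sensor line in, a structured message out, to be published to a Kafka topic) can be sketched as below. The frame format, field names, and device ID are assumptions for illustration; the paper's parsers are implemented as Node-RED flows.

```python
import json

def parse_ndir_line(raw, device_id):
    """Parse a hypothetical NDIR CO2 sensor frame "<unix_ts>,<ppm>" into the
    JSON message an acquisition flow would publish to a Kafka topic."""
    ts, ppm = raw.strip().split(",")
    return json.dumps({
        "device": device_id,        # which sensor produced the reading
        "timestamp": int(ts),       # acquisition time (Unix seconds)
        "co2_ppm": float(ppm),      # measured CO2 concentration
    })

msg = parse_ndir_line("1726128000,415.2\n", device_id="ndir-07")
```

Keeping the parser per-device but the message schema uniform is what lets the downstream monitoring and analysis layers stay campaign-agnostic.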
Figures:
Figure 1: (A) The location of release points A and B on the concourse floor; the entrances from the street level are marked. (B) The location of NDIR sensors in the concourse at trials A1–A4 and B1–B4. The red cross marks the release point, and the blue circles mark the locations of the NDIR sensors.
Figure 2: The location of the Kaijo anemometers and the release points in trials A1–A4 and B1–B4. The red cross marks the release point, and the blue circles mark the locations of the ultrasonic wind anemometers (Kaijo). Note that, in trials A1–A4, the three Kaijo anemometers were located at the same point but at different heights above ground.
Figure 3: The location of the Petri dishes in the outdoor field campaign. A black dot indicates each Petri dish.
Figure 4: The ArgosWeb interface allows the user to define the field campaign (experiment): the user defines the trials and their properties, as well as the devices and their properties. The deployment of the devices on the map is also determined with this interface (see Figure 5). See the text for a description of the functionality of each box.
Figure 5: Deploying the devices in the ArgosWeb interface (A). The graphical interface allows the user to select the devices to deploy from those defined in the campaign (B). The devices are deployed in a point, line, arc, or rectangle (the icons on the lower left side of the screen). The interface allows the user to set the trial-dependent properties.
Figure 6: The functional layers of the server domain.
Figure 7: A microservice implementation of the IIoT architecture of Figure 6 using open-source components.
Figure 8: The Node-RED workflow manages real-time data acquisition, parsing and sending the data to a Kafka topic. The parsing procedure is device-specific.
Figure 9: The dataflow in the indoor field campaign.
Figure 10: The Thingsboard dashboard showing the status of the NDIR sensors. (Left) The distribution of the NDIR sensors in the concourse and the platform; green indicates more than 70% of messages in the last minute, red indicates less than 30%. (Right) The message frequency during the last 30 min; each color indicates a different device.
Figure 11: The data flow in the outdoor field campaign.
16 pages, 3585 KiB  
Article
Upper-Limb and Low-Back Load Analysis in Workers Performing an Actual Industrial Use-Case with and without a Dual-Arm Collaborative Robot
by Alessio Silvetti, Tiwana Varrecchia, Giorgia Chini, Sonny Tarbouriech, Benjamin Navarro, Andrea Cherubini, Francesco Draicchio and Alberto Ranavolo
Safety 2024, 10(3), 78; https://doi.org/10.3390/safety10030078 - 11 Sep 2024
Viewed by 219
Abstract
In the Industry 4.0 scenario, human–robot collaboration (HRC) plays a key role in factories to reduce costs, increase production, and help aged and/or sick workers maintain their jobs. The approaches of the ISO 11228 series commonly used for biomechanical risk assessments cannot be applied in Industry 4.0, as they do not involve interactions between workers and HRC technologies. The use of wearable sensor networks and software for biomechanical risk assessments could provide a more reliable picture of the effectiveness of collaborative robots (coBots) in reducing the biomechanical load for workers. The aim of the present study was to investigate some biomechanical parameters with the 3D Static Strength Prediction Program (3DSSPP) software v.7.1.3 on workers executing a practical manual material-handling task, comparing a dual-arm coBot-assisted scenario with a no-coBot scenario. We calculated the mean and standard deviation (SD) values from eleven participants for the following 3DSSPP parameters: the percentage of maximum voluntary contraction (%MVC), the maximum allowed static exertion time (MaxST), the low-back spine compression force at the L4/L5 level (L4Ort), and the strength percent capable value (SPC).
The advantages of introducing the coBot, according to our statistics, concerned trunk flexion (SPC from 85.8% without the coBot to 95.2%; %MVC from 63.5% without the coBot to 43.4%; MaxST from 33.9 s without the coBot to 86.2 s), left shoulder abdo-adduction (%MVC from 46.1% without the coBot to 32.6%; MaxST from 32.7 s without the coBot to 65 s), and right shoulder abdo-adduction (%MVC from 43.9% without the coBot to 30.0%; MaxST from 37.2 s without the coBot to 70.7 s) in Phase 1, and right shoulder humeral rotation (%MVC from 68.4% without the coBot to 7.4%; MaxST from 873.0 s without the coBot to 125.2 s), right shoulder abdo-adduction (%MVC from 31.0% without the coBot to 18.3%; MaxST from 60.3 s without the coBot to 183.6 s), and right wrist flexion/extension rotation (%MVC from 50.2% without the coBot to 3.0%; MaxST from 58.8 s without the coBot to 1200.0 s) in Phase 2. Moreover, Phase 3, which consisted of another manual handling task, would be eliminated by using a coBot. In summary, using a coBot in this industrial scenario would reduce the biomechanical risk for workers, particularly for the trunk, both shoulders, and the right wrist. Finally, the 3DSSPP software could be an easy, fast, and cost-free tool for biomechanical risk assessments in Industry 4.0 scenarios where the ISO 11228 series cannot be applied; it could be used by occupational medicine physicians and health and safety technicians, and could also help employers justify a long-term investment.
Show Figures

Figure 1

Figure 1
<p>Some 3DSSPP reconstructions of the three subtasks analyzed: Phase 1 with (<b>a1</b>) and without (<b>b1</b>) the coBot; Phase 2 with (<b>a2</b>) and without (<b>b2</b>) the coBot; and Phase 3 with (<b>a3</b>) and without (<b>b3</b>) the coBot.</p>
Full article ">Figure 2
<p>Mean and SD values for Phase 1, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.</p>
Full article ">Figure 3
<p>Mean and SD values for Phase 2, with Bazar (wB) in blue and without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). An asterisk (*) over the bars shows statistical significance.</p>
Full article ">Figure 4
<p>Mean and SD values for Phase 3 without Bazar (woB) in red, for the investigated parameters (L4–L5 orthogonal forces, strength percent capable value, %MVC, and maximum holding time). When using the Bazar coBot, this phase would be totally automatized, so we do not have values with the Bazar (wB).</p>
Full article ">
26 pages, 6242 KiB  
Article
Wireless Sensor Node for Chemical Agent Detection
by Zabdiel Brito-Brito, Jesús Salvador Velázquez-González, Fermín Mira, Antonio Román-Villarroel, Xavier Artiga, Satyendra Kumar Mishra, Francisco Vázquez-Gallego, Jung-Mu Kim, Eduardo Fontana, Marcos Tavares de Melo and Ignacio Llamas-Garro
Chemosensors 2024, 12(9), 185; https://doi.org/10.3390/chemosensors12090185 - 11 Sep 2024
Viewed by 308
Abstract
In this manuscript, we present in detail the design and implementation of the hardware and software to produce a standalone wireless sensor node, called SensorQ system, for the detection of a toxic chemical agent. The proposed wireless sensor node prototype is composed of [...] Read more.
In this manuscript, we present in detail the design and implementation of the hardware and software to produce a standalone wireless sensor node, called SensorQ system, for the detection of a toxic chemical agent. The proposed wireless sensor node prototype is composed of a micro-controller unit (MCU), a radio frequency (RF) transceiver, a dual-band antenna, a rechargeable battery, a voltage regulator, and four integrated sensing devices, all of them integrated in a package with final dimensions and weight of 200 × 80 × 60 mm and 0.422 kg, respectively. The proposed SensorQ prototype operates using the Long-Range (LoRa) wireless communication protocol at 2.4 GHz, with a sensor head implemented on a hetero-core fiber optic structure supporting the surface plasmon resonance (SPR) phenomenon with a sensing section (L = 10 mm) coated with titanium/gold/titanium and a chemically sensitive material (zinc oxide) for the detection of Di-Methyl Methyl Phosphonate (DMMP) vapor in the air, a simulant of the toxic nerve agent Sarin. The transmitted spectra with respect to different concentrations of DMMP vapor in the air were recorded, and then the transmitted power for these concentrations was calculated at a wavelength of 750 nm. The experimental results indicate the feasibility of detecting DMMP vapor in air using the proposed optical sensor head, with DMMP concentrations in the air of 10, 150, and 150 ppm in this proof of concept. We expect that the sensor and wireless sensor node presented herein are promising candidates for integration into a wireless sensor network (WSN) for chemical warfare agent (CWA) detection and contaminated site monitoring without exposure of armed forces. Full article
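A hedged sketch of the final processing step described above: inverting the normalized transmitted intensity at 750 nm into a DMMP concentration. The calibration table is entirely hypothetical; the actual response of the Ti/Au/Ti/ZnO-coated hetero-core fibre must come from measurements such as those shown in Figure 16.

```python
# Hypothetical calibration: normalized transmitted intensity at 750 nm versus
# DMMP concentration in air (ppm). Real values must be measured for the
# Ti/Au/Ti/ZnO-coated fibre; these points are placeholders.
calibration = [
    (1.00, 0.0),    # (normalized intensity, ppm)
    (0.92, 10.0),
    (0.78, 150.0),
]

def intensity_to_ppm(intensity):
    """Piecewise-linear inversion of the calibration table."""
    pts = sorted(calibration, reverse=True)  # intensity decreases as ppm rises
    if intensity >= pts[0][0]:
        return pts[0][1]
    for (i1, c1), (i2, c2) in zip(pts, pts[1:]):
        if i2 <= intensity <= i1:
            frac = (i1 - intensity) / (i1 - i2)
            return c1 + frac * (c2 - c1)
    return pts[-1][1]  # clamp below the lowest calibrated intensity
```

Such an inversion would run on the node's MCU after sampling the photodetector, before the reading is sent over LoRa.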
Show Figures

Figure 1

Figure 1
<p>Hardware architecture of the proposed wireless sensor node.</p>
Full article ">Figure 2
<p>Wireless sensor node: (<b>a</b>) 3D model isometric (top/front/left) view, (<b>b</b>) 3D model lateral view, and (<b>c</b>) integrated and packaged wireless sensor node prototype.</p>
Full article ">Figure 3
<p>Architecture of the SensorQ system, showing deployed wireless sensor nodes (bottom of the figure) connected to the communications gateway mounted on UAVs. The communications gateway makes data available to the end user through the MQTT protocol and 4G/5G wireless communications links.</p>
Full article ">Figure 4
<p>Wireless sensor node electronics: (<b>a</b>) communications side view and (<b>b</b>) sensors side view. A description of each part according to enclosed numbers is provided in <a href="#chemosensors-12-00185-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 5
<p>Wireless sensor node antenna: (<b>a</b>) top view, showing the stacked dual band antenna setup and (<b>b</b>) bottom view showing interconnections and power divider network.</p>
Full article ">Figure 6
<p>Gateway electronics. A description of each part according to enclosed numbers is provided in <a href="#chemosensors-12-00185-t005" class="html-table">Table 5</a>.</p>
Full article ">Figure 7
<p>Graphical representation of the proposed sensor probe supporting the SPR effect with stacked material layers deposited on the SMF: longitudinal optical fiber section (left) and optical fiber cross sections (right).</p>
Full article ">Figure 8
<p>Representation of data collection by the WSN composed of one gateway and three sensor nodes operating under low-power listening mode.</p>
Full article ">Figure 9
<p>Representation of the frame-slotted ALOHA (FSA) time organization while the gateway collects data from each sensor node over a defined sequence of frames (top); slot representation (bottom).</p>
Full article ">Figure 10
<p>Wireless sensor node software architecture based on four inter-related layers (L1–L4): L1 is for the Hardware Abstraction Layer, L2 is for the Real-Time Operating System, L3 is for the drivers to access other devices, and L4 is for the Application Layer.</p>
Full article ">Figure 11
<p>Wireless software architecture of the gateway based on four inter-related layers (L1–L4): L1 is for the interface with different peripherals, L2 is for the Raspbian operating system of the Raspberry Pi, L3 is for the MQTT client, a GNSS receiver, and a LoRa radio transceiver driver, and L4 is for the parallel running tasks.</p>
Full article ">Figure 12
<p>Screenshot of the configuration dashboard, which allows for the manipulation of several parameters regarding the experiment, the MAC layer, the PHY layer, and the commands sections.</p>
Full article ">Figure 13
<p>Deployment of different data collected in the measurements dashboard (screenshot), such as environmental conditions (gas concentration and temperature), the status (RSSI, acceleration, and battery level), and the location (GPS position and altitude) from two sensor node prototypes.</p>
Full article ">Figure 14
<p>Average data collection time depending on (<b>a</b>) the number of slots per frame for a given number of sensor nodes (each node sends 1 data packet of 22 bytes), (<b>b</b>) the number of sensor nodes for a given number of slots per frame (each node sends 1 data packet of 22 bytes), and (<b>c</b>) the number of slots per frame for a given number of sensor nodes (each node sends 10 data packets of 22 bytes or 1 data packet of 220 bytes). All results are presented for SF-6. (<b>a</b>) Data collection time over number of slots (single packet of 22 bytes), (<b>b</b>) data collection time over number of sensor nodes (single packet of 22 bytes), and (<b>c</b>) data collection time over number of slots (10 packets of 22 bytes or 1 packet of 220 bytes).</p>
Full article ">Figure 15
<p>Sensor head experimental setup based on the optical fiber hetero-core structure coated with Ti/Au/Ti/ZnO.</p>
Full article ">Figure 16
<p>Normalized transmitted intensity for different concentrations of DMMP mixed in the air and interaction with our proposed sensing probe (dots: measured data; dashed line: trend).</p>
Full article ">
19 pages, 7149 KiB  
Article
Continuous High-Precision Positioning in Smartphones by FGO-Based Fusion of GNSS–PPK and PDR
by Amjad Hussain Magsi, Luis Enrique Díez and Stefan Knauth
Micromachines 2024, 15(9), 1141; https://doi.org/10.3390/mi15091141 - 11 Sep 2024
Viewed by 330
Abstract
The availability of raw Global Navigation Satellites System (GNSS) measurements in Android smartphones fosters advancements in high-precision positioning for mass-market devices. However, challenges like inconsistent pseudo-range and carrier phase observations, limited dual-frequency data integrity, and unidentified hardware biases on the receiver side prevent [...] Read more.
The availability of raw Global Navigation Satellite System (GNSS) measurements in Android smartphones fosters advancements in high-precision positioning for mass-market devices. However, challenges like inconsistent pseudo-range and carrier phase observations, limited dual-frequency data integrity, and unidentified hardware biases on the receiver side prevent the ambiguity resolution of smartphone GNSS. Consequently, relying solely on GNSS for high-precision positioning may result in frequent cycle slips in complex conditions such as deep urban canyons, underpasses, forests, and indoor areas due to non-line-of-sight (NLOS) and multipath conditions. Inertial/GNSS fusion is the traditional solution to tackle these challenges because of their complementary capabilities. For pedestrians and smartphones with low-cost inertial sensors, the usual architecture is Pedestrian Dead Reckoning (PDR) + GNSS. In addition to this, different GNSS processing techniques like Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) have also been integrated with INS. However, integration with PDR has been limited, with the Kalman Filter (KF) and its variants being the main fusion techniques. Recently, Factor Graph Optimization (FGO) has started to be used as a fusion technique due to its superior accuracy. To the best of our knowledge, no work has tested the fusion of GNSS Post-Processed Kinematics (PPK) and PDR on smartphones, and the works that have evaluated the fusion of GNSS and PDR employing FGO have always performed it using the GNSS Single-Point Positioning (SPP) technique. Therefore, this work aims to combine the use of the GNSS PPK technique and the FGO fusion technique to evaluate the improvement in accuracy that can be obtained on a smartphone compared with the usual GNSS SPP and KF fusion strategies.
We improved the Google Pixel 4 smartphone GNSS using Post-Processed Kinematics (PPK) with the open-source RTKLIB 2.4.3 software, then fused it with PDR via KF and FGO for comparison in offline mode. Our findings indicate that FGO-based PDR+GNSS–PPK improves accuracy by 22.5% compared with FGO-based PDR+GNSS–SPP, which shows smartphones obtain high-precision positioning with the implementation of GNSS–PPK via FGO. Full article
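FGO-based fusion poses positioning as a batch least-squares problem over a factor graph. The toy 1-D sketch below is not the paper's implementation (which uses full GNSS observation models); it links hypothetical GNSS position factors with PDR step-length factors along a chain and solves the joint problem with Gauss–Seidel sweeps.

```python
# Toy 1-D factor-graph-style batch fusion (illustrative only): each epoch has
# a GNSS position factor, and consecutive epochs are linked by a PDR
# step-length factor. Solved with Gauss-Seidel sweeps over the chain.
gnss = [0.0, 1.2, 1.8, 3.1]      # noisy GNSS fixes (m), hypothetical
steps = [1.0, 1.0, 1.0]          # PDR step displacements (m), hypothetical
w_g, w_p = 1.0, 4.0              # information weights: PDR trusted more here

x = list(gnss)                   # initialize at the GNSS-only solution
for _ in range(200):             # Gauss-Seidel sweeps until convergence
    for k in range(len(x)):
        num = w_g * gnss[k]
        den = w_g
        if k > 0:                # factor: x[k] ~ x[k-1] + steps[k-1]
            num += w_p * (x[k - 1] + steps[k - 1])
            den += w_p
        if k < len(x) - 1:       # factor: x[k] ~ x[k+1] - steps[k]
            num += w_p * (x[k + 1] - steps[k])
            den += w_p
        x[k] = num / den
```

The fused trajectory keeps the step lengths near the PDR estimate of 1 m while staying anchored to the GNSS fixes, which is the behaviour FGO exploits at full scale.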
Show Figures

Figure 1

Figure 1
<p>Inertial+GNSS fusion architectures.</p>
Full article ">Figure 2
<p>PDR mechanism.</p>
Full article ">Figure 3
<p>Factor graph for PDR+GNSS–SPP fusion architecture.</p>
Full article ">Figure 4
<p>Data collection setup.</p>
Full article ">Figure 5
<p>Difference between the ground truth, SPP-GNSS, and PPK-GNSS data.</p>
Full article ">Figure 6
<p>RTKLIB settings.</p>
Full article ">Figure 7
<p>Two-dimensional horizontal positioning errors in PDR+GNSS–SPP fusion architecture.</p>
Full article ">Figure 8
<p>Combined overall CDF from both fusion architectures using FGO and KF.</p>
Full article ">Figure 9
<p>Comparison of trajectories in PDR+GNSS–SPP fusion architectures.</p>
Full article ">Figure 10
<p>Two-dimensional horizontal positioning errors in PDR+GNSS–PPK fusion architecture.</p>
Full article ">Figure 11
<p>Comparison of trajectories in PDR+GNSS–PPK fusion architecture.</p>
Full article ">Figure 12
<p>Computational time consumed by FGO-based PDR+GNSS–PPK fusion architecture.</p>
Full article ">Figure 13
<p>Cumulative computational time consumed by FGO-based PDR+GNSS–PPK fusion.</p>
Full article ">Figure 14
<p>Google Pixel 4 raw GNSS observations.</p>
Full article ">
33 pages, 26346 KiB  
Article
Horizontal Test Stand for Bone Screw Insertion
by Jack Wilkie, Georg Rauter and Knut Möller
Hardware 2024, 2(3), 223-255; https://doi.org/10.3390/hardware2030011 - 9 Sep 2024
Viewed by 237
Abstract
Screws are a versatile method of fixation and are often used in orthopaedic surgery. Various specialised geometries are often used for bone screws to optimise their fixation strengths in limited spaces at the expense of manufacturing costs. Additionally, ongoing research is looking to [...] Read more.
Screws are a versatile method of fixation and are often used in orthopaedic surgery. Various specialised geometries are often used for bone screws to optimise their fixation strengths in limited spaces at the expense of manufacturing costs. Additionally, ongoing research is looking to develop systems/models to automatically optimise bone screw tightening torques. For both applications, it is desirable to have a test rig for inserting screws in a regulated, instrumented, and repeatable manner. This work presents such a test rig primarily used for the validation of optimal torque models; however, other applications like the above are easily foreseeable. Key features include controllable insertion velocity profiles and high-rate measurement of screw torque, angular displacement, and linear displacement. The test rig is constructed from mostly inexpensive components, with the primary costs being the rotational torque sensor (approx. 2000 €), and the remainder being approximately 1000 €. This is in comparison to a biaxial universal testing machine, which may exceed 100,000 €. Additionally, the firmware and interface software are designed to be easily extendable. The angular velocity profiling and linear measurement repeatability of the test rig are tested, and the torque readings are compared to an off-the-shelf static torque sensor. Full article
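Controllable insertion velocity profiles of the kind mentioned above are commonly generated as trapezoids: ramp up, hold, ramp down. A minimal sketch with illustrative parameters, not values taken from the article:

```python
# Symmetric trapezoidal angular-velocity profile of the kind a screw-insertion
# test rig might command. All parameter values are illustrative.
def trapezoid_velocity(t, accel=10.0, v_max=5.0, t_total=4.0):
    """Angular velocity (rad/s) at time t for a symmetric trapezoidal profile."""
    t_ramp = v_max / accel                   # duration of each ramp
    if t < 0 or t > t_total:
        return 0.0
    if t < t_ramp:                           # acceleration phase
        return accel * t
    if t > t_total - t_ramp:                 # deceleration phase
        return accel * (t_total - t)
    return v_max                             # constant-velocity plateau
```

Firmware would evaluate such a function at the motor-driver step rate and convert the commanded velocity into step pulses.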
Show Figures

Figure 1

Figure 1
<p>Complete test rig design with labeled parts referred to below. Length excluding the counterweight (H) is 700 mm, width is approximately 300 mm including clearance for the draw-wire encoder (E) wire, and height is 235 mm, with extra clearance required for the torque sensor (B) plug totaling approximately 300 mm, unless a right-angle connector is used, in which case the height remains within 235 mm.</p>
Full article ">Figure 2
<p>Test rig controller with labeled components referred to below.</p>
Full article ">Figure 3
<p>Diagram of the designs of the devices, tasks, data, and their interactions in the firmware. This is simplified to show the most important relationships.</p>
Full article ">Figure 4
<p>Coupled rotational components.</p>
Full article ">Figure 5
<p>Assembled sliding platform. (<b>a</b>) Bottom view. Note the indicated part. (<b>b</b>) Top view.</p>
Full article ">Figure 6
<p>Assembled base of test stand.</p>
Full article ">Figure 7
<p>Base with sliding platform mounted on test stand.</p>
Full article ">Figure 8
<p>Visualisation of steps for assembling the sample mount on the test stand. (<b>a</b>) Base mounted. (<b>b</b>) Lower plate added. (<b>c</b>) Threaded rods added. (<b>d</b>) Upper plate added.</p>
Full article ">Figure 9
<p>(<b>a</b>) Pulley block assembly. (<b>b</b>) Block mounted on test stand.</p>
Full article ">Figure 10
<p>Steps for assembling counterweight wire. (<b>a</b>) Wire tied around nut to secure it in the sliding platform wire holder. (<b>b</b>) Wire threaded through hole drilled in bolt. (<b>c</b>) Threaded nut screwed into counterweight.</p>
Full article ">Figure 11
<p>Steps for fitting draw-wire holding block. (<b>a</b>) Assembled draw-wire holding block. (<b>b</b>) Holding block mounted to test stand.</p>
Full article ">Figure 12
<p>Front and back cut-outs in controller box. Indicated hole is not part of final design.</p>
Full article ">Figure 13
<p>Front panel connectors/wires. Indicated connectors are not part of the final design. (<b>a</b>) Connectors fitted to front panel with wires soldered. (<b>b</b>) Front view of front panel connectors. (<b>c</b>) Motor connection cables fitted through cable grommets.</p>
Full article ">Figure 14
<p>Power supply connections assembly. (<b>a</b>) Wires attached to plug. (<b>b</b>) Plug attached to rear panel and PSU terminals.</p>
Full article ">Figure 15
<p>Power supply and motor driver mounting. (<b>a</b>) Attached case sides and base to power supply. (<b>b</b>) Mounted motor driver onto side of power supply.</p>
Full article ">Figure 16
<p>Wire connections for the motor driver. The Tiva ground for power is connected via the PUL- port to minimise loop size for this rapidly switching signal.</p>
Full article ">Figure 17
<p>Tiva mounting. (<b>a</b>) Back-stop added. (<b>b</b>) Back-stop secured with screws and Tiva inserted. (<b>c</b>) First clip fitted and second half-on to demonstrate process. (<b>d</b>) Tiva fully secured.</p>
Full article ">Figure 18
<p>Pin connection positions on the Tiva board. Note, power source selection switch position.</p>
Full article ">Figure 19
<p>Pin numbering used on 8-pin circular connectors.</p>
Full article ">Figure 20
<p>Inline voltage divider for the torque sensor output to the Tiva input. Shaded areas show where to cover connections with electrical tape. Silver ovals show soldering locations.</p>
Full article ">Figure 21
<p>Fully wired controller box.</p>
Full article ">Figure 22
<p>Interactions with main window to add, and connect to, the test rig.</p>
Full article ">Figure 23
<p>Diagnostics window with normal display.</p>
Full article ">Figure 24
<p>Bone screw test rig settings/control window.</p>
Full article ">Figure 25
<p>Experimental setup with off-the-shelf torque sensor connected to test rig shaft.</p>
Full article ">Figure 26
<p>Visual clarification of how changing the spacing in the coupler changes the amount of magnetic field coupling into the torque sensor shaft. The magnetic permeability of the air and aluminium is very low and similar in comparison to that of the steel screw-bit holder and torque sensor shaft.</p>
Full article ">Figure 27
<p>Error in torque measurement over tested range with different magnetic coupling.</p>
Full article ">Figure 28
<p>Error from perfect profile under different loading conditions.</p>
Full article ">Figure A1
<p>Main window with labeled controls.</p>
Full article ">Figure A2
<p>Settings window with labeled controls.</p>
Full article ">Figure A3
<p>Diagnostics window with labeled controls.</p>
Full article ">Figure A4
<p>Basic demo window with labeled controls.</p>
Full article ">Figure A5
<p>Fancy demo window with labeled controls.</p>
Full article ">
24 pages, 3132 KiB  
Article
Comparing Large-Eddy Simulation and Gaussian Plume Model to Sensor Measurements of an Urban Smoke Plume
by Dominic Clements, Matthew Coburn, Simon J. Cox, Florentin M. J. Bulot, Zheng-Tong Xie and Christina Vanderwel
Atmosphere 2024, 15(9), 1089; https://doi.org/10.3390/atmos15091089 - 7 Sep 2024
Viewed by 494
Abstract
The fast prediction of the extent and impact of accidental air pollution releases is important to enable a quick and informed response, especially in cities. Despite this importance, only a small number of case studies are available studying the dispersion of air pollutants [...] Read more.
The fast prediction of the extent and impact of accidental air pollution releases is important to enable a quick and informed response, especially in cities. Despite this importance, only a small number of case studies are available studying the dispersion of air pollutants from fires over short distances (O(1 km)) in urban areas. While we were monitoring pollution levels in Southampton, UK, using low-cost sensors, a fire broke out from an outbuilding containing roughly 3000 reels of highly flammable cine nitrate film and movie equipment, which resulted in high values of PM2.5 being measured by the sensors approximately 1500 m downstream of the fire site. This provided a unique opportunity to evaluate urban air pollution dispersion models using observed data for PM2.5 and the meteorological conditions. Two numerical approaches were used to simulate the plume from the transient fire: a high-fidelity computational fluid dynamics model with large-eddy simulation (LES) embedded in the open-source package OpenFOAM, and a lower-fidelity Gaussian plume model implemented in a commercial software package: the Atmospheric Dispersion Modeling System (ADMS). Both numerical models were able to quantitatively reproduce consistent spatial and temporal profiles of the PM2.5 concentration at approximately 1500 m downstream of the fire site. Considering the unavoidable large uncertainties, a comparison between the sensor measurements and the numerical predictions was carried out, leading to an approximate estimation of the emission rate, temperature, and the start and duration of the fire. The estimation of the fire start time was consistent with the local authority report. The LES data showed that the fire lasted for at least 80 min at an emission rate of 50 g/s of PM2.5. The emission was significantly greater than that of a ‘normal’ house fire reported in the literature, suggesting the crucial importance of the emission estimation and monitoring of PM2.5 concentration in such incidents.
Finally, we discuss the advantages and limitations of the two numerical approaches, aiming to guide the selection of fast-response numerical models at different trade-off levels of accuracy, efficiency, and cost. Full article
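For orientation, the Gaussian plume model that ADMS implements in far more refined form can be sketched in a few lines. The power-law dispersion coefficients below are illustrative assumptions, not ADMS's stability-class parametrisation.

```python
import math

# Minimal Gaussian plume sketch with ground reflection (image source).
# Dispersion coefficients use an assumed power law; a real study would use
# stability-class parametrisations as ADMS does.
def plume_concentration(x, y, z, Q=50.0, u=5.0, H=10.0):
    """Concentration (g/m^3) at (x, y, z) downwind of a point source.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective source height (m).
    """
    sigma_y = 0.08 * x ** 0.9          # lateral spread (m), assumed power law
    sigma_z = 0.06 * x ** 0.9          # vertical spread (m), assumed power law
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-((z - H) ** 2) / (2 * sigma_z ** 2))
                + math.exp(-((z + H) ** 2) / (2 * sigma_z ** 2)))  # ground image
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Evaluating this at the sensor distance (x ≈ 1500 m) with the estimated 50 g/s emission rate is the kind of calculation the ADMS comparison refines with meteorology and source temperature.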
(This article belongs to the Special Issue Advances in Urban Air Pollution Observation and Simulation)
Show Figures

Figure 1

Figure 1
<p>Location of low-cost sensors (black markers) across Southampton along with fire location (red marker). The smoke plume from the fire was captured primarily by the sensor located near St Mary’s, southeast of the source of the fire.</p>
Full article ">Figure 2
<p>Time series of wind speed and wind gust speed. Time = 0 denotes fire start time, corresponding to local time 17:30.</p>
Full article ">Figure 3
<p>Same as in <a href="#atmosphere-15-01089-f002" class="html-fig">Figure 2</a>, but for wind direction.</p>
Full article ">Figure 4
<p>Same as in <a href="#atmosphere-15-01089-f002" class="html-fig">Figure 2</a>, but for air temperature.</p>
Full article ">Figure 5
<p>Comparison of PMS and SPS sensor data showing consistency in monitored concentrations of PM<sub>2.5</sub>.</p>
Full article ">Figure 6
<p>CFD domain with a size of 3450 m (streamwise, <span class="html-italic">x</span> coordinate) × 2200 m (spanwise, <span class="html-italic">y</span> coordinate) × 1000 m (vertical, <span class="html-italic">z</span> coordinate). The streamwise direction is from the 310° wind direction (north-westerly wind).</p>
Full article ">Figure 7
<p>Mesh visualized on a plane within the city domain for zoomed-in near the St. Mary’s Football Stadium at ground level (<b>left</b>) and overview of mesh on the ground and far-field (<b>right</b>).</p>
Full article ">Figure 8
<p>CFD simulated instantaneous streamwise velocity (m/s).</p>
Full article ">Figure 9
<p>Sensor data downstream of fire with respect to the estimated fire start time (<b>a</b>) and zoomed-in for the duration of fire (<b>b</b>). The left side scale presents the measurements in units of μg/m<sup>3</sup>, whereas the right side scale presents the measurements non-dimensionalized based on an estimated emission rate <math display="inline"><semantics> <mrow> <mi>Q</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math> g/s, the reference velocity <math display="inline"><semantics> <msub> <mi>U</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>f</mi> </mrow> </msub> </semantics></math>, and the average building height <span class="html-italic">h</span> to aid the comparison in <a href="#sec5dot3-atmosphere-15-01089" class="html-sec">Section 5.3</a>.</p>
Full article ">Figure 10
<p>Contours of ground level PM<sub>2.5</sub> concentration at <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> h for a source temperature 500° with an emission rate 50 g/s, calculated from ADMS and presented over a wide domain in (<b>a</b>) and zoomed in close to the sensor in (<b>b</b>).</p>
Full article ">Figure 11
<p>Dimensionless emission rates for various source release periods.</p>
Full article ">Figure 12
<p>Dimensionless concentration at the sensor location from four CFD releases, compared with the sensor data. These are based on the estimated emission rate <math display="inline"><semantics> <mrow> <mi>Q</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math> g/s, the freestream velocity <math display="inline"><semantics> <msub> <mi>U</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>f</mi> </mrow> </msub> </semantics></math>, and the averaged building height <span class="html-italic">h</span>.</p>
Full article ">Figure 13
<p>Instantaneous concentration <math display="inline"><semantics> <mrow> <mi>C</mi> <mo>/</mo> <mi>Q</mi> </mrow> </semantics></math> contours on ground level at 1.5 (<b>top</b>) and 3 (<b>bottom</b>) hours time after the fire started, provided by LES in OpenFOAM with an 80-min release. The streamwise direction was the incoming wind direction from left to right.</p>
Full article ">Figure 14
<p>Dimensionless concentration (<math display="inline"><semantics> <mrow> <mover accent="true"> <mi>c</mi> <mo stretchy="false">^</mo> </mover> <mo>=</mo> <mi>C</mi> <mo>∗</mo> <msub> <mi>U</mi> <mrow> <mi>r</mi> <mi>e</mi> <mi>f</mi> </mrow> </msub> <mo>∗</mo> <msup> <mi>h</mi> <mn>2</mn> </msup> <mo>/</mo> <mi>Q</mi> </mrow> </semantics></math>) time series. Sensor measurements normalized by three guessed emission rates: 5 g/s, 50 g/s and 150 g/s. Three fire temperatures tested in ADMS: 100 °C, 500 °C and 1000 °C. Two sample locations tested in OpenFOAM (OF): sensor site, and plume center.</p>
Full article ">Figure 15
<p>A comparison of the mean concentrations obtained from all three methods (sensors, CFD and ADMS) along with error bars representing the total uncertainty associated with each method and their associated assumptions.</p>
Full article ">Figure 16
<p>Concentration time series from sensor measurements, CFD and ADMS predictions, based on an estimated emission rate of 50 g/s (see <a href="#atmosphere-15-01089-f014" class="html-fig">Figure 14</a>). Different fire temperatures 100 °C, 500 °C and 1000 °C were tested for the ADMS data.</p>
Full article ">
19 pages, 342 KiB  
Article
Overview of Embedded Rust Operating Systems and Frameworks
by Thibaut Vandervelden, Ruben De Smet, Diana Deac, Kris Steenhaut and An Braeken
Sensors 2024, 24(17), 5818; https://doi.org/10.3390/s24175818 - 7 Sep 2024
Viewed by 395
Abstract
Embedded Operating Systems (OSs) are often developed in the C programming language. Developers justify this choice by the performance that can be achieved, the low memory footprint, and the ease of mapping hardware to software, as well as the strong adoption by industry [...] Read more.
Embedded Operating Systems (OSs) are often developed in the C programming language. Developers justify this choice by the performance that can be achieved, the low memory footprint, and the ease of mapping hardware to software, as well as the strong adoption by industry of this programming language. The downside is that C is prone to security vulnerabilities unknowingly introduced by the software developer. Examples of such vulnerabilities are use-after-free, and buffer overflows. Like C, Rust is a compiled programming language that guarantees memory safety at compile time by adhering to a set of rules. There already exist a few OSs and frameworks that are entirely written in Rust, targeting sensor nodes. In this work, we give an overview of these OSs and frameworks and compare them on the basis of the features they provide, such as application isolation, scheduling, inter-process communication, and networking. Furthermore, we compare the OSs on the basis of the performance they provide, such as cycles and memory usage. Full article
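The executor/future model these Rust frameworks build on (see Figure 1 of the article) can be illustrated language-neutrally. The Python sketch below mimics Rust's `Poll::Pending`/`Poll::Ready` contract with naive busy-polling; real executors instead use wakers so a future is only re-polled once it signals readiness.

```python
# Toy poll-based executor in the spirit of Rust async runtimes (illustrative,
# busy-polling; real executors re-poll only when a waker notifies them).
PENDING, READY = "Poll::Pending", "Poll::Ready"

class DelayFuture:
    """Becomes ready after being polled `ticks` times (stand-in for a timer)."""
    def __init__(self, name, ticks):
        self.name, self.ticks = name, ticks

    def poll(self):
        self.ticks -= 1
        return READY if self.ticks <= 0 else PENDING

def run_executor(futures):
    """Poll every future until all complete; return completion order."""
    completed = []
    pending = list(futures)
    while pending:
        still_pending = []
        for fut in pending:
            if fut.poll() == READY:
                completed.append(fut.name)
            else:
                still_pending.append(fut)
        pending = still_pending
    return completed

order = run_executor([DelayFuture("task1", 3), DelayFuture("task2", 1)])
```

The shorter "timer" completes first even though it was submitted second, which is the cooperative-multitasking behaviour the compared OSs provide without per-task stacks.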
Show Figures

Figure 1

Figure 1
<p>Asynchronous model in Rust with an executor and 2 futures. The executor polls the futures, which are executed until they are ready. A future returns <tt>Poll::Pending</tt> when it needs to wait for a resource, and <tt>Poll::Ready</tt> when it is ready. The executor is notified when a future is ready to be polled again.</p>
Full article ">Figure 2
<p>Example of Stack Resource Policy-based scheduling in RTIC with 3 tasks, with Task 1 having the lowest priority and Task 3 the highest. Task 1 and Task 2 share the same resource. When Task 1 is scheduled, it raises the running priority to medium, which is the highest priority of the resources it is using (shown in step 1). Task 2 is scheduled, but cannot preempt Task 1 as it requires the same resource (shown in step 2), preventing a deadlock where Task 2 would wait indefinitely for the occupied resource. Task 3 is scheduled, preempting Task 1 (shown in step 3) as it has a higher priority. When Task 3 is finished, the execution of Task 1 is resumed. Task 1 releases the resource and lowers the priority to its original level (shown in step 4). Task 2 can now preempt Task 1.</p>
Full article ">Figure 3
<p><span class="html-italic">Interrupt latency</span> and <span class="html-italic">scheduling latency</span> measurement setup. The interrupt latency is the time between the interrupt being triggered and the start of the ISR. The interrupt duration is the time it takes to handle the ISR. The scheduling latency is the time it takes to schedule a task. The microcontroller should be in an idle state such that the interrupt does not preempt a running task. Otherwise, the scheduling latency is not measured correctly, as after the interrupt handling, another task would resume its execution before task scheduling occurs.</p>
Full article ">Figure 4
<p>Total latency measurements for different number of tasks for Embassy, RTIC, and Tock. For Tock with the round-robin and cooperative scheduler, the total latency increases non-linearly with the number of tasks. All other schedulers show a linear increase in total latency.</p>
Full article ">
21 pages, 8887 KiB  
Article
Real-Time Performance Measurement Application via Bluetooth Signals for Signalized Intersections
by Fuat Yalçınlı, Bayram Akdemir and Akif Durdu
Appl. Sci. 2024, 14(17), 7849; https://doi.org/10.3390/app14177849 - 4 Sep 2024
Viewed by 356
Abstract
Improving the performance at signalized intersections can be achieved through different management styles or sensor technologies. It is crucial that we measure the real-time impact of these variables on intersection performance. This study introduces a Bluetooth-based real-time performance measurement system applicable to all [...] Read more.
Improving the performance at signalized intersections can be achieved through different management styles or sensor technologies. It is crucial that we measure the real-time impact of these variables on intersection performance. This study introduces a Bluetooth-based real-time performance measurement system applicable to all signalized intersections. Additionally, the developed method serves as a feedback tool for adaptive intersection management systems, providing valuable data input for performance optimization. The method developed in the study is applied at the Refik Cesur Intersection in the Polatlı district of Ankara where delay values are calculated based on traffic flows and data from Bluetooth sensors positioned at strategic locations. Initially, the intersection operated under a fixed-time signaling system, followed by a fully adaptive signaling system the next day. The performance of these two systems is compared using the Bluetooth-based application. The results show that the average delay per vehicle per day is 58.1 seconds/vehicle for the fixed-time system and 45.3 seconds/vehicle for the adaptive system. To validate the Bluetooth-based performance measurement system, the intersection is modeled and simulated using Aimsun Simulation Software Next 20.0.4. The simulation results confirm the findings of the Bluetooth-based analysis, demonstrating the effectiveness of the adaptive signaling system in reducing delays. Full article
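The core of a Bluetooth-based delay measurement is matching (hashed) device addresses between an upstream and a downstream sensor and subtracting the free-flow travel time. A minimal sketch with hypothetical detection records:

```python
# Hypothetical detection records: hashed MAC address -> detection time (s)
# at the upstream and downstream Bluetooth sensors of one approach.
upstream   = {"mac_a1": 10.0, "mac_b2": 12.0, "mac_c3": 15.0}
downstream = {"mac_a1": 95.0, "mac_b2": 60.0, "mac_d4": 70.0}

FREE_FLOW_TIME = 40.0  # assumed seconds to traverse the segment with no delay

def delays(up, down, free_flow):
    """Per-vehicle delay: travel time minus free-flow time (floored at 0)."""
    return {mac: max(0.0, down[mac] - up[mac] - free_flow)
            for mac in up.keys() & down.keys()}

per_vehicle = delays(upstream, downstream, FREE_FLOW_TIME)
avg_delay = sum(per_vehicle.values()) / len(per_vehicle)  # s/vehicle
```

Aggregating `avg_delay` per hour per traffic flow yields comparisons like those in Figures 8–10; unmatched detections (vehicles leaving the route) are simply dropped.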
(This article belongs to the Section Transportation and Future Mobility)
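The delay measurement described in the abstract above can be sketched in a few lines: devices are re-identified by their Bluetooth address at an upstream and a downstream sensor, and the per-vehicle delay is the measured travel time minus a free-flow travel time. This is an illustrative sketch only; the detector records, the 20 s free-flow time, and the matching logic are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Bluetooth-based delay estimation at a signalized
# intersection. Records are (device_id, timestamp_seconds) sightings; the
# free-flow time between the two sensors is an assumed constant.

FREE_FLOW_S = 20.0  # assumed free-flow travel time between the sensors

def match_delays(upstream, downstream, free_flow_s=FREE_FLOW_S):
    """Pair each device's first upstream sighting with its first later
    downstream sighting and return the per-vehicle delays in seconds."""
    first_up = {}
    for dev, t in upstream:
        first_up.setdefault(dev, t)  # keep earliest upstream sighting
    delays = []
    for dev, t in downstream:
        t0 = first_up.get(dev)
        if t0 is not None and t > t0:
            delays.append(max(0.0, (t - t0) - free_flow_s))
            del first_up[dev]  # match each device only once
    return delays

upstream = [("aa:01", 0.0), ("aa:02", 5.0), ("aa:03", 8.0)]
downstream = [("aa:01", 78.1), ("aa:02", 65.3), ("aa:03", 28.0)]
delays = match_delays(upstream, downstream)
avg_delay = sum(delays) / len(delays)  # average delay per matched vehicle
```

Averaging such per-vehicle delays over a day is what allows the fixed-time and adaptive systems to be compared on the same footing.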
Figures
Figure 1: Refik Cesur Intersection satellite image.
Figure 2: Refik Cesur Intersection directions and traffic flows.
Figure 3: Fully adaptive signaling system image-processing-based sensor.
Figure 4: Bluetooth-based traffic analysis detector structure.
Figure 5: Application on sample intersection.
Figure 6: Bluetooth traffic analysis sensor layouts.
Figure 7: Bluetooth traffic analysis sensor field installation.
Figure 8: Traffic flow hourly delay: (a) Q5 traffic flow hourly delay comparison; and (b) Q6 traffic flow hourly delay comparison.
Figure 9: Traffic flow hourly delay: (a) Q7 traffic flow hourly delay comparison; and (b) Q8 traffic flow hourly delay comparison.
Figure 10: Traffic flow hourly delay: (a) Q9 traffic flow hourly delay comparison; and (b) Q10 traffic flow hourly delay comparison.
Figure 11: Refik Cesur Intersection Aimsun simulation modeling.
Figure 12: Signalized intersection management feedback control structure.
32 pages, 7523 KiB  
Article
Methods and Software Tools for Reliable Operation of Flying LiFi Networks in Destruction Conditions
by Herman Fesenko, Oleg Illiashenko, Vyacheslav Kharchenko, Kyrylo Leichenko, Anatoliy Sachenko and Lukasz Scislo
Sensors 2024, 24(17), 5707; https://doi.org/10.3390/s24175707 - 2 Sep 2024
Viewed by 383
Abstract
An analysis of using unmanned aerial vehicles (UAVs) to form flying networks under obstacle conditions, together with various obstacle-avoidance algorithms, is conducted. A planning scheme for deploying a UAV-based flying LiFi network in a production facility with obstacles is developed and described. Such networks are necessary to ensure reliable data transmission from sensors or other information sources located in dangerous or hard-to-reach places to the crisis centre. Based on the planning scheme, the following stages are described: (1) laying the LiFi signal propagation route under interference conditions, (2) placing UAVs at the specified points of the laid route to deploy the LiFi network, and (3) ensuring the reliability of the deployed LiFi network. Strategies for deploying UAVs from a stationary depot to form a flying LiFi network in a room with obstacles are considered, namely the first-point-of-the-route strategy, the radial movement strategy, and the midpoint-of-the-route strategy. Methods for ensuring the uninterrupted functioning of the flying LiFi network at the required level of reliability within a given time are developed and discussed. To implement the planning stages for deploying the UAV flying LiFi network in a production facility with obstacles, the “Simulation Way” and “Reliability Level” software tools are developed and described. Examples of using the proposed software tools are given.
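A minimal model of the probability-of-fault-free-operation (PFFO) question raised in the abstract above can be sketched as follows, assuming the relay chain works only while every UAV works and that each UAV fails exponentially. The failure rate, fleet size, and shift length are illustrative assumptions, and the cold-spare variant is a standard reliability construction, not the authors' method.

```python
import math

# Illustrative reliability sketch for a flying LiFi relay chain: the
# network operates only while all n relay UAVs operate (series system),
# each with an exponentially distributed time to failure. All numeric
# parameters below are assumed values, not figures from the paper.

def pffo_series(n_uavs, failure_rate_per_h, t_hours):
    """PFFO of n identical UAVs in series: exp(-n * lambda * t)."""
    return math.exp(-n_uavs * failure_rate_per_h * t_hours)

def pffo_with_cold_spare(n_uavs, failure_rate_per_h, t_hours):
    """Series chain backed by one cold-standby UAV that instantly
    replaces the first failure (Erlang-2 survival of the pooled rate)."""
    lam = n_uavs * failure_rate_per_h
    return math.exp(-lam * t_hours) * (1.0 + lam * t_hours)

# One assumed 8-hour shift with 5 relay UAVs failing at 0.01 per hour.
p_plain = pffo_series(5, 0.01, 8.0)
p_spare = pffo_with_cold_spare(5, 0.01, 8.0)
```

Comparing `p_plain` and `p_spare` shows why spare UAVs and shift rotation, as in the one-shift and two-shift schemes the paper discusses, raise the PFFO over a working period.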
Figures
Figure 1: Diagram for deploying a UAV-based LiFi network with obstacles in a production facility.
Figure 2: Strategy of the first point of the route representation.
Figure 3: Radial movement strategy representation.
Figure 4: Midpoint strategy representation.
Figure 5: Dependence of the flying LiFi network’s deployment time on the UAV’s speed for the strategies of the first point of the route, radial movement, and the route’s midpoint.
Figure 6: The flying LiFi network deployed using the following strategies: (a) the first point of the route; (b) radial movement; (c) midpoint.
Figure 7: A one-shift work cycle used to deploy and ensure LiFi functionality.
Figure 8: Alternate work of two shifts to deploy and ensure the functioning of the flying LiFi network.
Figure 9: Dependence of the probability of fault-free operation on time for the (a) first shift; (b) second shift.
Figure 10: A general graph of the dependence of the PFFO on time for both shifts.
Figure 11: Architecture of the “Simulation Way” software tool.
Figure 12: Use-case diagram for the simulation in “Simulation Way”.
Figure 13: Sequence diagram of the interaction of the CC operator with “Simulation Way”.
Figure 14: A modular diagram demonstrating the logic of interaction between the components of the “Simulation Way” software tool.
Figure 15: View of the control panel with functional zones: 1—process launch zone; 2—zone of implementation of the method (rules) of avoiding obstacles; 3—zone of the graphic display of layers; 4—modelling parameters setting area.
Figure 16: An example of displaying the working area of a production facility with obstacles in 2D space.
Figure 17: “Simulation Way” tool results for the right-angle algorithm to bypass obstacles: (a) control panel with parameters for laying a LiFi route; (b) image of a LiFi route.
Figure 18: “Simulation Way” tool results for the left-angle algorithm to bypass obstacles: (a) control panel with parameters for laying a LiFi route; (b) image of a LiFi route.
Figure 19: “Simulation Way” tool results for the controlled waterfall algorithm to bypass obstacles: (a) control panel with parameters for laying a LiFi route; (b) image of a LiFi route.
Figure 20: “Simulation Way” results: UAV movement routes from the depot to the placement points on the laid LiFi route using (a) the first point of the route strategy, (b) the radial movement strategy, and (c) the midpoint strategy.
Figure 21: Graphs of the probability of fault-free operation versus time: (a) PFFO for the first shift; (b) PFFO for the second shift; (c) a general graph for both shifts.
20 pages, 3727 KiB  
Article
Two-Dimensional Mammography Imaging Techniques for Screening Women with Silicone Breast Implants: A Pilot Phantom Study
by Isabelle Fitton, Virginia Tsapaki, Jonathan Zerbib, Antoine Decoux, Amit Kumar, Aude Stembert, Françoise Malchair, Claire Van Ngoc Ty and Laure Fournier
Bioengineering 2024, 11(9), 884; https://doi.org/10.3390/bioengineering11090884 - 31 Aug 2024
Viewed by 513
Abstract
This study aimed to evaluate the impact of three two-dimensional (2D) mammographic acquisition techniques on image quality and radiation dose in the presence of silicone breast implants (BIs). We then propose and validate a new International Atomic Energy Agency (IAEA) phantom to reproduce these techniques. Images were acquired on a single Hologic Selenia Dimensions® unit. The mammography of the left breast of a single clinical case was included. Three methods of image acquisition were identified, based on misused, recommended, and reference settings. In the clinical case, image criteria scoring and the signal-to-noise ratio on breast tissue (SNRBT) were determined for two 2D projections and compared between the three techniques. The phantom study first compared the reference and misused settings by varying the AEC sensor position and, second, the recommended settings with a current-time product (mAs) setting reduced by 13%. The signal-difference-to-noise ratio (SDNR) and detectability indexes at 0.1 mm (d’ 0.1 mm) and 0.25 mm (d’ 0.25 mm) were automatically quantified using ATIA software. Average glandular dose (AGD) values were collected for each acquisition. A statistical analysis was performed using Kruskal–Wallis and corrected Dunn tests (p < 0.05). The SNRBT was 2.6 times lower and the AGD 18% lower with the reference settings than with the recommended settings. The SNRBT values increased by +98% with the misused settings compared to the recommended settings. The AGD increased by +79% with the misused settings versus the recommended settings. The median values of the reference settings were 5.8 (IQR 5.7–5.9), 1.2 (IQR 0.0), 7.0 (IQR 6.8–7.2) and 1.2 (IQR 0.0) mGy, and were significantly lower than those of the misused settings (p < 0.03): 7.9 (IQR 6.1–9.7), 1.6 (IQR 1.3–1.9), 9.2 (IQR 7.5–10.9) and 2.2 (IQR 1.4–3.0) mGy for the SDNR, d’ 0.1 mm, d’ 0.25 mm and the AGD, respectively. A comparison of the recommended and reduced settings showed reductions of −6.1 ± 0.6% (p = 0.83), −7.7 ± 0.0% (p = 0.18), −6.4 ± 0.6% (p = 0.19) and −13.3 ± 1.1% (p = 0.53) for the SDNR, d’ 0.1 mm, d’ 0.25 mm and the AGD, respectively. This study showed that the IAEA phantom can reproduce the three techniques for acquiring 2D mammography images in the presence of breast implants, for awareness-raising and educational purposes. It could also be used to evaluate and optimize the manufacturer’s recommended settings.
(This article belongs to the Special Issue Advances in Breast Cancer Imaging)
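The SDNR metric quantified in the abstract above is a standard ratio that can be sketched directly: the mean pixel difference between a signal region of interest and a background region, divided by the background noise. The pixel values and ROI choices below are synthetic illustrations, not the ATIA software's actual computation.

```python
from statistics import mean, pstdev

# Illustrative computation of the signal-difference-to-noise ratio (SDNR)
# from two regions of interest (ROIs); pixel values here are synthetic.

def sdnr(signal_roi, background_roi):
    """SDNR = |mean(signal) - mean(background)| / std(background)."""
    noise = pstdev(background_roi)  # population standard deviation
    return abs(mean(signal_roi) - mean(background_roi)) / noise

signal_roi = [112, 110, 111, 113, 109, 112]      # pixels over the detail
background_roi = [100, 102, 98, 101, 99, 100]    # pixels over the base
value = sdnr(signal_roi, background_roi)
```

A higher SDNR means the detail stands out more clearly from the background, which is why it moves together with the detectability indexes reported for each AEC setting.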
Figures
Figure 1: Representation of the seven AEC sensor positions labeled 1 through 7 and marked with red squares on the images of the PMMA (a) and IAEA (b) phantoms. C, N1, N2, CW1 and CW2 indicate the positions of the five regions of interest used for signal-to-noise measurements.
Figure 2: (a) IAEA phantom and (b) mammographic image, with 1–2 indicating the uniform attenuator of poly(methyl methacrylate), 3 indicating the copper plate and 4 indicating the aluminium foil.
Figure 3: Flow chart showing the different methods of image acquisition.
Figure 4: Image comparison between two different mammograms performed in the same patient during follow-up, using the manual (a,c) and fully automatic (b,d) modes. (a,b) Left cranio-caudal projection; (c,d) left medio-lateral oblique projection. The squares displayed on the images indicate the location of the AEC sensor for the automatic mode.
Figure 5: Influence of the seven AEC sensor positions, labeled 1 to 7, on image quality, radiation dose and acquisition parameters. (a) Signal-difference-to-noise ratio; (b,c) detectability index at 0.1 mm and 0.25 mm; (d) tube current-time product; (e) average glandular dose; (f) kilovoltage peaks.
Figure 6: Variation in phantom image quality parameters for three different tube current-time product levels: reference, manufacturer’s recommended and reduced. (a) Signal-difference-to-noise ratio; (b) average glandular dose; (c,d) detectability at 0.1 mm and 0.25 mm, respectively.