Search Results (1,977)

Search Parameters:
Keywords = range camera

21 pages, 11350 KiB  
Article
A Fast Obstacle Detection Algorithm Based on 3D LiDAR and Multiple Depth Cameras for Unmanned Ground Vehicles
by Fenglin Pang, Yutian Chen, Yan Luo, Zigui Lv, Xuefei Sun, Xiaobin Xu and Minzhou Luo
Drones 2024, 8(11), 676; https://doi.org/10.3390/drones8110676 (registering DOI) - 15 Nov 2024
Viewed by 250
Abstract
With the advancement of technology, unmanned ground vehicles (UGVs) have shown increasing application value in various tasks, such as food delivery and cleaning. A key capability of UGVs is obstacle detection, which is essential for avoiding collisions during movement. Current mainstream methods use point cloud information from onboard sensors, such as light detection and ranging (LiDAR) and depth cameras, for obstacle perception. However, the substantial volume of point clouds generated by these sensors, coupled with the presence of noise, poses significant challenges for efficient obstacle detection. Therefore, this paper presents a fast obstacle detection algorithm designed to ensure the safe operation of UGVs. Building on multi-sensor point cloud fusion, an efficient ground segmentation algorithm based on multi-plane fitting and plane combination is proposed to prevent ground points from being treated as obstacles. Additionally, instead of point cloud clustering, a vertical projection method is used to count the distribution of potential obstacle points by converting the point cloud to a 2D polar coordinate system. Points in fan-shaped areas with a density lower than a certain threshold are considered noise. To verify the effectiveness of the proposed algorithm, a cleaning UGV equipped with one LiDAR sensor and four depth cameras was used to test obstacle detection performance in various environments. Several experiments demonstrated the effectiveness and real-time capability of the proposed algorithm. The experimental results show that it achieves an over 90% detection rate within a 20 m sensing area with an average processing time of just 14.1 ms per frame.
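The fan-shaped density filter described in the abstract can be sketched compactly. The following Python/NumPy snippet is an illustrative reconstruction of the vertical-projection idea, not the authors' code; the grid resolution and density threshold are placeholder values.

```python
import numpy as np

def filter_obstacle_points(points, r_max=20.0, n_sectors=180, n_rings=40, min_density=5):
    """Project non-ground points onto a 2D polar grid and drop sparse cells.

    points: (N, 3) array of non-ground points in the sensor frame.
    Cells of the fan-shaped (sector x ring) grid whose point count falls
    below `min_density` are treated as noise; all thresholds here are
    illustrative, not the paper's tuned values.
    """
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)  # angle in [-pi, pi]

    in_range = r < r_max
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    ring = np.minimum((r / r_max * n_rings).astype(int), n_rings - 1)

    counts = np.zeros((n_sectors, n_rings), dtype=int)
    np.add.at(counts, (sector[in_range], ring[in_range]), 1)

    keep = in_range & (counts[sector, ring] >= min_density)
    return points[keep]
```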
Show Figures

Figure 1: Overall process of the proposed algorithm.
Figure 2: The schematic diagram of coordinate transformation. T_{C_i L} represents the relative pose relationship from the i-th depth camera to the LiDAR sensor.
Figure 3: The schematic diagram of ground point cloud segmentation.
Figure 4: Schematic diagram of fan-shaped area retrieval.
Figure 5: The loaded sensors. (a) Leishen C32W LiDAR; (b) Orbbec DaBai DCW2; (c) Orbbec DaBai MAX.
Figure 6: The cleaning UGV equipped with these sensors.
Figure 7: The vertical view of the fused point cloud in the main coordinate system (warehouse).
Figure 8: The vertical view of the fused point cloud in the main coordinate system (parking).
Figure 9: Ground segmentation by Patchwork++ in the warehouse environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 10: Ground segmentation by DipG-Seg in the warehouse environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 11: Ground segmentation by the proposed method in the warehouse environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 12: Ground segmentation by Patchwork++ in the parking environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 13: Ground segmentation by DipG-Seg in the parking environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 14: Ground segmentation by the proposed method in the parking environment. Red represents the ground point cloud and green the non-ground point cloud.
Figure 15: Detailed image of the ground segmentation effect of the proposed algorithm. (a) Warehouse; (b) parking.
Figure 16: The vertical view of the obstacle detection effect using Euclidean clustering (warehouse).
Figure 17: The vertical view of the obstacle detection effect using CenterPoint (warehouse).
Figure 18: The vertical view of the obstacle detection effect using the proposed algorithm with smaller hyperparameter settings (warehouse).
Figure 19: The vertical view of the obstacle detection effect using the proposed algorithm with larger hyperparameter settings (warehouse).
Figure 20: The vertical view of the obstacle detection effect using Euclidean clustering (parking).
Figure 21: The vertical view of the obstacle detection effect using CenterPoint (parking).
Figure 22: The vertical view of the obstacle detection effect using the proposed algorithm with smaller hyperparameter settings (parking).
Figure 23: The vertical view of the obstacle detection effect using the proposed algorithm with larger hyperparameter settings (parking).
19 pages, 73341 KiB  
Article
A Comparative Study on the Use of Smartphone Cameras in Photogrammetry Applications
by Photis Patonis
Sensors 2024, 24(22), 7311; https://doi.org/10.3390/s24227311 - 15 Nov 2024
Viewed by 289
Abstract
The evaluation of smartphone camera technology for close-range photogrammetry includes assessing captured photos for 3D measurement. In this work, experiments are conducted on many smartphones to study distortion levels and accuracy performance in close-range photogrammetry applications. Analytical methods and specialized digital tools are employed to evaluate the results. OpenCV functions estimate the distortions introduced by the lens. Diagrams, evaluation images, statistical quantities, and indicators are utilized to compare the results among sensors. The accuracy achieved in photogrammetry is examined using the photogrammetric bundle adjustment in a real-world application. Finally, generalized conclusions are drawn regarding this technology's use in close-range photogrammetry applications.
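The lens-distortion estimation mentioned in the abstract is commonly performed with OpenCV's checkerboard calibration routines. A minimal sketch follows, assuming a 9 × 6 inner-corner checkerboard with 25 mm squares and a hypothetical calib/ image folder; the paper's exact settings are not specified here.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry is an assumption (9x6 inner corners, 25 mm squares).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS re-projection error, camera matrix and distortion
# coefficients (k1, k2, p1, p2, k3); assumes at least one image was found.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS re-projection error: {rms:.3f} px, distortion: {dist.ravel()}")
```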
Show Figures

Figure 1: The layout of the control points used for the rectification procedure.
Figure 2: A set of photographs showing the checkerboard pattern used for camera calibration.
Figure 3: Distribution of distortions for all smartphone cameras.
Figure 4: Distortion color gradients, in micrometers.
Figure 5: Rectification error color gradients, in millimeters.
Figure 6: The converged photos of the control field.
Figure 7: A control point on the building's facade. Left: in the actual photo. Right: through the total station's telescope.
Figure 8: The transformation of the control point coordinates to a new reference system. (a) The new reference coordinate system. (b) The coordinate transformation in the Surveyor-Photogrammetry software version 6.1.
Figure 9: An instance of the bundle adjustment function in the Surveyor-Photogrammetry software version 6.1.
Figure 10: Normalized values of the pixel size, number of pixels, sensor area, RMS 3D error, mean rectification error, re-projection error, and mean distortion.
Figure 11: Scatter charts for all cameras with the trendline calculated by linear regression. (a) RMS 3D error and mean distortion. (b) RMS 3D error and pixel size. (c) RMS 3D error and sensor area. (d) Pixel size and sensor area. (e) Pixel size and total number of pixels. (f) RMS 3D error and re-projection error.
15 pages, 59170 KiB  
Technical Note
Investigating Defect Detection in Advanced Ceramic Additive Manufacturing Using Active Thermography
by Anthonin Demarbaix, Enrique Juste, Tim Verlaine, Ilario Strazzeri, Julien Quinten and Arnaud Notebaert
NDT 2024, 2(4), 504-518; https://doi.org/10.3390/ndt2040031 - 15 Nov 2024
Viewed by 275
Abstract
Additive manufacturing of advanced materials has become widespread, encompassing a range of materials including thermoplastics, metals, and ceramics. For ceramics, the complete production process typically involves indirect additive manufacturing, where the green ceramic part undergoes debinding and sintering to achieve its final mechanical and thermal properties. To avoid unnecessary energy-intensive steps, it is crucial to assess the internal integrity of the ceramic in its green stage. This study investigates the use of active thermography for defect detection. The approach examines detectability using two benchmarks: the first focuses on the detectability threshold, and the second on typical defects encountered in 3D printing. For the first benchmark, reflection and transmission modes are tested with and without a camera angle to minimize reflection. The second benchmark is then assessed using the most effective configurations identified. All defects larger than 1.2 mm were detectable across the benchmarks. The method can successfully detect defects, with transmission mode being more suitable since it does not require a camera angle adjustment to avoid reflections. However, the method struggles to detect typical 3D-printing defects because the minimum defect size is 0.6 mm, which is the size of the nozzle.
(This article belongs to the Topic Nondestructive Testing and Evaluation)
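The figure captions below refer to PCT (principal component thermography) images. A compact sketch of the usual PCT evaluation (SVD of the standardized thermogram stack) is given here for orientation; this reflects the standard formulation, not necessarily the authors' exact processing chain.

```python
import numpy as np

def pct(frames, n_components=2):
    """Principal component thermography on a thermogram stack.

    frames: (T, H, W) array of infrared frames over time. Each pixel's
    time history is standardized, the (T, H*W) matrix is decomposed by
    SVD, and the leading empirical orthogonal functions are returned as
    images in which subsurface defects typically show high contrast.
    """
    t, h, w = frames.shape
    a = frames.reshape(t, h * w).astype(float)
    a -= a.mean(axis=0)          # remove each pixel's temporal mean
    a /= a.std(axis=0) + 1e-12   # standardize to unit variance
    _, _, vt = np.linalg.svd(a, full_matrices=False)
    return vt[:n_components].reshape(n_components, h, w)
```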
Show Figures

Figure 1: The three different mechanisms for MEX additive manufacturing.
Figure 2: Physical principle of active thermography.
Figure 3: External appearance of the benchmarks at the end of 3D printing.
Figure 4: Schematic view of benchmark B1, the "detection threshold" benchmark.
Figure 5: Schematic view of benchmark B2, the "defect printing" benchmark.
Figure 6: Schematic view of the experimental setup: (a) reflection mode and (b) transmission mode.
Figure 7: Schematic view of the camera tilt in the experimental setup.
Figure 8: PCT of benchmark B1 in 0° angle transmission with shape tracing.
Figure 9: Measurement of the largest defects on PCT: (a) benchmark B1 and (b) benchmark B2.
11 pages, 2683 KiB  
Communication
A Low-Cost Modulated Laser-Based Imaging System Using Square Ring Laser Illumination for Depressing Underwater Backscatter
by Yansheng Hao, Yaoyao Yuan, Hongman Zhang, Shao Zhang and Ze Zhang
Photonics 2024, 11(11), 1070; https://doi.org/10.3390/photonics11111070 - 14 Nov 2024
Viewed by 363
Abstract
Underwater vision data facilitate a variety of underwater operations, including underwater ecosystem monitoring, topographical mapping, mariculture, and marine resource exploration. Conventional laser-based underwater imaging systems with complex architectures rely on high-cost, high-power laser systems, and software-based methods cannot enrich the physical information captured by cameras. In this manuscript, a low-cost modulated laser-based imaging system is proposed with a spot in the shape of a square ring, eliminating the overlap between the illumination light path and the imaging path; this reduces the negative effect of backscatter on the imaging process and enhances imaging quality. The imaging system achieves underwater imaging at long distance (e.g., 10 m) at turbidities in the range of 2.49 to 7.82 NTUs, and the adjustable divergence angle of the laser tubes gives the proposed system the flexibility to image according to application requirements, such as the overall view or partial detail information of targets. Compared with a conventional underwater imaging camera (DS-2XC6244F, Hikvision, Hangzhou, China), the developed system provides better imaging performance in both visual effect and quantitative evaluation (e.g., UCIQUE and IE). Through integration with the CycleGAN-based method, the imaging results can be further improved, with the UCIQUE increased by 0.4. The proposed low-cost imaging system, with its compact structure and low energy consumption, can be mounted on platforms such as underwater robots and AUVs to facilitate real-world underwater applications.
Show Figures

Figure 1: Schematics of the underwater optical imaging process.
Figure 2: (a) Schematics (top) and actual photograph (bottom) of the modulated laser illumination system for underwater imaging. (b) Field photograph of the underwater imaging experiment (bottom) and the square ring laser spot (top).
Figure 3: (a) Block diagram of the optoelectronic system. (b) The diagram (left) and actual photograph (right) of the electrical control system based on the STM32. (c) Flow chart of the dedicated firmware.
Figure 4: Comparison of original images captured by the camera under illumination by the modulated laser (top) and the diverging laser (bottom) at different distances.
Figure 5: Effects of the relationship between the FOV and MLDA on imaging: (a) FOV < MLDA, (b) FOV = MLDA, and (c) FOV > MLDA.
Figure 6: Comparison of original images captured by the DS-2XC6244F and the MLIS at different distances.
Figure 7: Comparison of images captured by the MLIS and enhanced with the optimized algorithm, with the average UCIQUE improved from 0.428 to 0.925.
15 pages, 18745 KiB  
Article
Robust Adaptive Robotic Visual Servo Grasping with Guaranteed Field of View Constraints
by Liang Li, Junqi Luo, Peitao Hong, Wenhao Bai, Zhenyu Zhang and Liucun Zhu
Actuators 2024, 13(11), 457; https://doi.org/10.3390/act13110457 - 14 Nov 2024
Viewed by 264
Abstract
Visual servo grasping technology has garnered significant attention in intelligent manufacturing for its potential to enhance both the flexibility and precision of robotic operations. However, traditional approaches frequently encounter challenges such as task failure when visual features move outside the camera’s field of view (FoV) and system instability due to interaction matrix singularities, limiting the technology’s effectiveness in complex environments. This study introduces a novel control strategy that leverages an asymmetric time-varying performance function to address the issue of visual feature escape. By strictly limiting the range of feature error, our approach ensures that visual features consistently remain within the camera’s FoV, thereby enhancing both the transient and steady-state performance of the system. Furthermore, we have developed an adaptive damped least squares controller that dynamically adjusts the damping term to mitigate numerical instability resulting from interaction matrix singularities. The effectiveness of our method has been validated through grasping experiments involving significant rotations around the camera’s optical axis and other complex movements.
(This article belongs to the Section Actuators for Robotics)
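The damped least squares idea at the heart of the PPC-ADLS approach can be illustrated as follows. This is a hedged sketch of a generic adaptive damped least squares IBVS step; the damping schedule shown is a common heuristic and is not claimed to be the paper's exact rule.

```python
import numpy as np

def dls_velocity(L, error, gain=0.5, damping0=1e-3, sigma_thresh=0.05):
    """Adaptive damped least-squares IBVS control law (illustrative).

    L: (2k, 6) image interaction (feature Jacobian) matrix.
    error: (2k,) feature error s - s*.
    The damping mu grows as the smallest singular value of L approaches
    zero, bounding the commanded camera velocity near singularities.
    """
    sigma_min = np.linalg.svd(L, compute_uv=False)[-1]
    if sigma_min >= sigma_thresh:
        mu = damping0
    else:  # quadratic ramp-up of damping inside the singular region
        mu = damping0 + (1.0 - sigma_min / sigma_thresh) ** 2
    # v = -gain * (L^T L + mu I)^{-1} L^T e
    v = -gain * np.linalg.solve(L.T @ L + mu * np.eye(6), L.T @ error)
    return v  # 6-DoF camera twist [vx, vy, vz, wx, wy, wz]
```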
Show Figures

Figure 1: The geometric model of a pinhole camera.
Figure 2: Control block diagram of the proposed algorithm.
Figure 3: Case 1: large rotation around the optical axis. Grasping results using the IBVS method. (a) Initial pose. (b,c) Intermediate poses. (d) Final pose.
Figure 4: Case 1: large rotation around the optical axis. Grasping results using the PPC-ADLS method. (a) Initial pose. (b) Intermediate pose. (c) Final pose. (d) Object grasped.
Figure 5: Case 1: large rotation around the optical axis. Path of image features on the pixel plane. (a) IBVS method. (b) PPC-ADLS method. The dashed rectangle represents the preset camera FoV with corresponding pixel coordinates (u_min = 120, v_min = 120, u_max = 560, v_max = 470).
Figure 6: Case 1: large rotation around the optical axis. Evolution of the image feature error over time using the PPC-ADLS method (blue dashed line: upper limit of the performance function; orange dashed line: lower limit of the performance function).
Figure 7: Case 1: large rotation around the optical axis. (a) Evolution of the image feature error using the IBVS method. (b) Condition number along the robot end-effector trajectory in 3D space (IBVS: dashed line; PPC-ADLS: solid line).
Figure 8: Case 2: complex movement. Grasping results using the IBVS method. (a) Initial pose. (b,c) Intermediate poses. (d) Final pose.
Figure 9: Case 2: complex movement. Grasping results using the PPC-ADLS method. (a) Initial pose. (b) Intermediate pose. (c) Final pose. (d) Object grasped.
Figure 10: Case 2: complex movement. Trajectory of image features on the pixel plane. (a) IBVS method. (b) PPC-ADLS method. The dashed rectangle represents the preset camera FoV with corresponding pixel coordinates (u_min = 50, v_min = 50, u_max = 550, v_max = 470).
Figure 11: Case 2: complex movement. Evolution of the image feature error with the corresponding performance boundaries using the PPC-ADLS method (blue dashed line: upper limit of the performance function; orange dashed line: lower limit of the performance function).
Figure 12: Case 2: complex movement. (a) Evolution of the image feature error using the IBVS method. (b) Condition number along the robot end-effector trajectory in 3D space (IBVS: dashed line; PPC-ADLS: solid line).
31 pages, 2257 KiB  
Article
Evaluation of Cluster Algorithms for Radar-Based Object Recognition in Autonomous and Assisted Driving
by Daniel Carvalho de Ramos, Lucas Reksua Ferreira, Max Mauro Dias Santos, Evandro Leonardo Silva Teixeira, Leopoldo Rideki Yoshioka, João Francisco Justo and Asad Waqar Malik
Sensors 2024, 24(22), 7219; https://doi.org/10.3390/s24227219 - 12 Nov 2024
Viewed by 550
Abstract
Perception systems for assisted driving and autonomy enable the identification and classification of objects through a concentration of sensors installed in vehicles, including Radio Detection and Ranging (RADAR), camera, Light Detection and Ranging (LIDAR), ultrasound, and HD maps. These sensors ensure a reliable and robust navigation system. Radar, in particular, operates with electromagnetic waves and remains effective under a variety of weather conditions. It uses point cloud technology to map the objects in front of the vehicle, making it easy to group these points and associate them with real-world objects. Numerous clustering algorithms have been developed and can be integrated into radar systems to identify, investigate, and track objects. In this study, we evaluate several clustering algorithms to determine their suitability for application in automotive radar systems. Our analysis covers a variety of current methods and their mathematical formulation, and presents a comparison of these algorithms, including Hierarchical Clustering, Affinity Propagation, Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Mini-Batch K-Means, K-Means, Mean Shift, OPTICS, Spectral Clustering, and Gaussian Mixture. We found that K-Means, Mean Shift, and DBSCAN are particularly suitable for these applications, based on performance indicators that assess suitability and efficiency, with DBSCAN showing the best performance among them. Furthermore, our findings highlight that the choice of radar significantly impacts the effectiveness of these object recognition methods.
(This article belongs to the Section Radar Sensors)
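As an illustration of the kind of evaluation the study performs, the snippet below clusters a hypothetical 2D radar detection frame with scikit-learn's DBSCAN; the eps and min_samples values are placeholders that would need tuning to a real radar's point density.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical radar detections: (x, y) positions in metres, one frame.
rng = np.random.default_rng(0)
pedestrian = rng.normal([5.0, 1.0], 0.2, size=(15, 2))
vehicle = rng.normal([20.0, -2.0], 0.6, size=(40, 2))
clutter = rng.uniform([-5.0, -10.0], [40.0, 10.0], size=(10, 2))
detections = np.vstack([pedestrian, vehicle, clutter])

# eps: neighbourhood radius in metres; min_samples: MinPts.
labels = DBSCAN(eps=1.0, min_samples=4).fit_predict(detections)
for k in sorted(set(labels)):
    tag = "noise" if k == -1 else f"object {k}"
    print(tag, "->", np.sum(labels == k), "points")
```

Points labeled -1 are exactly the low-density detections DBSCAN rejects as noise, which is why the study finds it well suited to cluttered radar point clouds.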
Show Figures

Figure 1: FMCW radar system block diagram.
Figure 2: Basic topology of a radar system.
Figure 3: Radar measurement range classification: Short Range Radar (SRR) / Middle Range Radar (MRR) / Long Range Radar (LRR).
Figure 4: Architecture of the automotive ECU-Radar with its components, technologies, and applications enabled for DA features.
Figure 5: Information processing of the automotive ECU-Radar.
Figure 6: Neighborhood of a point: each point in a cluster has a neighborhood of a certain radius that contains at least a certain number of points.
Figure 7: Direct density reachability: object p is directly density-reachable from object q when p is in the ε-neighborhood of q and q is a core point.
Figure 8: Density reachability: object p is density-reachable from object q in a set D if there is a chain of objects such that p is directly density-reachable from q with respect to MinPts.
Figure 9: The parameters of DBSCAN.
Figure 10: Point clustering using the K-Means algorithm.
Figure 11: Mean Shift algorithm parameters.
Figure 12: OPTICS algorithm parameters.
Figure 13: Mixture of Gaussians: three Gaussian functions are illustrated, so K = 3. Each Gaussian explains the data contained in one of the three available clusters.
Figure 14: Automotive radar process from point cloud to cluster and object detection and recognition.
Figure 15: Process for applying clustering in a radar system.
Figure 16: Radar detecting pedestrians in Driving Scenario Design.
Figure 17: Radar detecting a cyclist in Driving Scenario Design.
Figure 18: Radar detecting a stopped vehicle in Driving Scenario Design.
Figure 19: Radar detecting a moving vehicle in Driving Scenario Design.
Figure 20: Radar detecting many objects in Driving Scenario Design.
Figure 21: DBSCAN recognizing many objects.
Figure 22: Comparison of the clustering algorithms.
Figure 23: Performance of the algorithms in tests.
21 pages, 7841 KiB  
Article
Research on a Method for Measuring the Pile Height of Materials in Agricultural Product Transport Vehicles Based on Binocular Vision
by Wang Qian, Pengyong Wang, Hongjie Wang, Shuqin Wu, Yang Hao, Xiaoou Zhang, Xinyu Wang, Wenyan Sun, Haijie Guo and Xin Guo
Sensors 2024, 24(22), 7204; https://doi.org/10.3390/s24227204 - 11 Nov 2024
Viewed by 359
Abstract
The advancement of unloading technology in combine harvesting is crucial for the intelligent development of agricultural machinery. Accurately measuring material pile height in transport vehicles is essential, as uneven accumulation can lead to spillage and voids, reducing loading efficiency. Relying solely on manual observation for measuring stack height can decrease harvesting efficiency and pose safety risks due to driver distraction. This research applies binocular vision to agricultural harvesting, proposing a novel method that uses a stereo matching algorithm to measure material pile height during harvesting. By comparing distance measurements taken in the empty and loaded states, the method determines stack height. A linear regression model processes the stack height data, enhancing measurement accuracy. A binocular vision system was established, applying Zhang’s calibration method on the MATLAB (R2019a) platform to correct camera parameters, achieving a calibration error of 0.15 pixels. The study implemented block matching (BM) and semi-global block matching (SGBM) algorithms using the OpenCV (4.8.1) library on the PyCharm (2020.3.5) platform for stereo matching, generating disparity and pseudo-color maps. Three-dimensional coordinates of key points on the piled material were calculated to measure the distances from the vehicle container bottom and the material surface to the binocular camera, allowing the material pile height to be calculated. Furthermore, a linear regression model was applied to correct the data, enhancing the accuracy of the measured pile height. The results indicate that by employing binocular stereo vision and stereo matching algorithms, followed by linear regression, this method can accurately calculate material pile height. The average relative error was 3.70% for the BM algorithm and 3.35% for the SGBM algorithm, both within the acceptable precision range. While the SGBM algorithm was, on average, 46 ms slower than the BM algorithm, both maintained errors under 7% and computation times under 100 ms, meeting the real-time measurement requirements for combine harvesting. In practical operations, this method can effectively measure material pile height in transport vehicles. The choice of matching algorithm should consider container size, material properties, and the balance between measurement time, accuracy, and disparity map completeness. This approach aids the manual adjustment of machinery posture and provides data support for future autonomous master-slave collaborative operations in combine harvesting.
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
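The SGBM stereo-matching step described in the abstract maps directly onto OpenCV's StereoSGBM API. The sketch below shows the disparity-to-depth chain under assumed calibration values (focal length, baseline) and placeholder file names; the paper's tuned parameters are not reproduced.

```python
import cv2
import numpy as np

# Rectified left/right frames are assumed; file names are illustrative.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 7
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,          # smoothness penalties suggested by OpenCV docs
    P2=32 * block * block,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# Depth by triangulation: Z = f * B / d, with focal length f (px) and
# baseline B (m) from the stereo calibration. Values are placeholders.
f_px, baseline_m = 1200.0, 0.12
valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = f_px * baseline_m / disp[valid]

# Pile height = distance to the empty container bottom minus distance to
# the material surface at the same pixel, as in the measurement scheme.
```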
Show Figures

Figure 1: Principle of triangulation.
Figure 2: Zhang's calibration steps.
Figure 3: The corner extraction results of the checkerboard. (a) Calibration paper; (b) calibration plate.
Figure 4: The relative position between the binocular camera and the calibration board. (a) Calibration paper; (b) calibration plate.
Figure 5: The reprojection errors of the chessboard calibration. (a) Calibration paper; (b) calibration plate.
Figure 6: Epipolar correction. (a) Before correction; (b) after correction.
Figure 7: Basic workflow of stereo matching.
Figure 8: Method for measuring the height of piled materials.
Figure 9: The process of measuring the piled height of potatoes.
Figure 10: The image under no-load conditions. (a) Left image; (b) right image; (c) BM disparity map; (d) BM pseudo-color map; (e) SGBM disparity map; (f) SGBM pseudo-color map.
Figure 11: Images of three different load conditions.
Figure 12: The distance measurement results between the surface of the stacked potatoes and the stereo camera under three different conditions. (a) State 1; (b) state 2; (c) state 3.
Figure 13: Regression models and evaluation metrics. (a) BM measured and calibrated values; (b) SGBM measured and calibrated values; (c) residual plot of the BM regression model; (d) residual plot of the SGBM regression model.
Figure 14: Comparison of pile heights and errors before and after calibration.
Full article ">
22 pages, 5336 KiB  
Review
A Review of the Measurement of the Multiphase Slug Frequency
by Ronaldo Luís Höhn, Abderraouf Arabi, Youssef Stiriba and Jordi Pallares
Processes 2024, 12(11), 2500; https://doi.org/10.3390/pr12112500 - 11 Nov 2024
Viewed by 398
Abstract
The slug frequency (SF), which refers to the number of liquid slugs passing through a pipe during a specific time, is an important parameter for characterizing multiphase intermittent flows and monitoring some processes involving this kind of flow. The simplicity of the definition of SF contrasts with the difficulty of correctly measuring it. This manuscript reviews and discusses the various techniques and methods developed to determine the slug frequency experimentally. Significantly, this review reveals the absence of a universal measurement method applicable to a wide range of operating conditions; thus, recourse to recording videos with high-speed cameras, which can be used only at a laboratory scale, often remains necessary. From the summarized state of the art, it appears that correctly defining the threshold values for detecting the liquid slug/elongated bubble interface from physical-parameter time series, increasing the applicability of instrumentation at industrial scales, and properly estimating the uncertainties are the challenges that must be faced to advance the measurement of SF.
(This article belongs to the Section Energy Systems)
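The threshold-based slug detection discussed throughout the review reduces, in its simplest form, to counting upward threshold crossings in a holdup (or equivalent) time series. A minimal sketch follows; the threshold value is illustrative, and choosing it correctly is precisely the difficulty the review highlights.

```python
import numpy as np

def slug_frequency(holdup, fs, threshold=0.7):
    """Estimate slug frequency from a liquid-holdup time series.

    holdup: 1D array of instantaneous liquid holdup (0-1).
    fs: sampling rate in Hz.
    A slug is counted at every upward crossing of `threshold`.
    """
    above = holdup > threshold
    # upward crossings: sample i is above while sample i-1 was below
    crossings = np.count_nonzero(above[1:] & ~above[:-1])
    duration_s = len(holdup) / fs
    return crossings / duration_s  # slugs per second (Hz)
```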
Show Figures

Figure 1: Schematic presentation of the slug flow for different pipe inclination ranges; the intrinsic slug parameters are also displayed. Reprinted from [8]: Flow Measurement and Instrumentation, vol. 72, O. Cazarez-Candia, O.C. Benítez-Centeno, "Comprehensive experimental study of liquid-slug length and Taylor-bubble velocity in slug flow", page 2, Copyright © 2020, with permission from Elsevier.
Figure 2: Schematic diagram of the content discussed in the present review.
Figure 3: Example of the illumination system used for camera video recordings by Mohmmed et al. [22]. Reprinted from International Journal of Pressure Vessels and Piping, vol. 172, Abdalellah O. Mohmmed, Hussain H. Al-Kayiem, Mohammad S. Nasif, Rune W. Time, "Effect of slug flow frequency on the mechanical stress behavior of pipelines", page 3, Copyright © 2019, with permission from Elsevier.
Figure 4: Elongated bubble location using the LSI method (air–water, 30 mm ID, θ = 0°, V_SL = 1.5 m/s and V_SG = 0.58 m/s). Reprinted from Sassi et al. [18]. Copyright © 2022, Paulo Sassi et al.
Figure 5: Example of a void fraction time series collected by Maldonado et al. [34] using a capacitive wire-mesh sensor, showing the regions of the Taylor bubble and liquid slug. Reprinted from Experimental Thermal and Fluid Science, vol. 151, Paul A. D. Maldonado, Carolina C. Rodrigues, Ernesto Mancilla, Eduardo N. dos Santos, Roberto da Fonseca Junior, Moises A. Marcelino Neto, Marco J. da Silva, Rigoberto E. M. Morales, "Spatial distribution of void fraction in the liquid slug in vertical gas–liquid slug flow", page 5, Copyright © 2023, with permission from Elsevier.
Figure 6: Examples of (a) a conductance probe, (b) ECT, and (c) WMS used in Zhao et al. [86] (Copyright © 2016, Zhao et al.); and (d) the wrapped fiber cable of distributed acoustic sensing (DAS) used by Ali et al. [8] (Copyright © 2024, Ali et al.).
Figure 7: Example of a pressure drop time series showing the peaks induced by the passage of liquid slugs (orange crosses) and by the presence of different kinds of flow structures (waves, roll waves, or pseudo slugs) (green crosses). Copyright © 2022, Sassi et al. Note that the green crosses are added and are not part of the original source.
Figure 8: Experimental holdup time series with depiction of the threshold used to detect the passage of liquid slugs. Reproduced from Eyo and Lao [124]. Reprinted from AIChE Journal, vol. 65, e16711, Edem N. Eyo, Liyun Lao, "Slug flow characterization in horizontal annulus", page 5, Copyright © 2019 American Institute of Chemical Engineers, with permission from Wiley.
Figure 9: Threshold cut value applied to a normalized voltage or instantaneous liquid holdup time series: (a) using a single threshold value (TV) and (b) using one threshold value for the slug region (TV_slug) and another for the film (TV_film). Reprinted from Flow Measurement and Instrumentation, vol. 79, Gabriel Soto-Cortes, Eduardo Pereyra, Cem Sarica, Carlos Torres, Auzan Soedarmo, "Signal processing for slug flow analysis via a voltage or instantaneous liquid holdup time series", page 2, Copyright © 2021, with permission from Elsevier.
Figure 10: Example of binarized liquid holdup signals showing the translational times of the liquid slug (T_ls) and elongated bubble (T_eb).
Figure 11: Example of a frequency spectrum obtained by applying PSD to the void fraction time series. Taken from [141]. Reprinted from International Journal of Multiphase Flow, vol. 37(8), Wael H. Ahmed, "Experimental investigation of air–oil slug flow using capacitance probes, hot-film anemometer, and image processing", page 886, Copyright © 2011, with permission from Elsevier.
Figure 12: Comparison between the slug frequency measured using the W&M method applied for two pressure tap distances and the LSI method, obtained by Sassi et al. [17]. Copyright © 2022, Paulo Sassi et al.
14 pages, 1291 KiB  
Article
Determining Validity and Reliability of an In-Field Performance Analysis System for Swimming
by Dennis-Peter Born, Marek Polach and Craig Staunton
Sensors 2024, 24(22), 7186; https://doi.org/10.3390/s24227186 - 9 Nov 2024
Viewed by 367
Abstract
To permit the collection of quantitative data on start, turn, and clean swimming performances in any swimming pool, the aims of the present study were to (1) validate a mobile in-field performance analysis system (PAS) against the Kistler starting block equipped with force plates and synchronized with a 2D camera system (KiSwim, Kistler, Winterthur, Switzerland), (2) assess the PAS’s interrater reliability, and (3) provide percentiles as reference values for elite junior and adult swimmers. Members of the Swiss junior and adult national swimming teams, including medalists at Olympic Games, World and European Championships, volunteered for the present study (n = 47; age: 17 ± 4 [range: 13–29] years; World Aquatics Points: 747 ± 100 [range: 527–994]). All start and turn trials were video-recorded and analyzed using two methods: PAS and KiSwim. The PAS involves one fixed-view camera recording overwater start footage and a sport action camera that is moved underwater along the side of the pool, perpendicular to the swimming lane, on a 1.55 m long monostand. Of a total of 25 parameters determined with the PAS, 16 are also measurable with the KiSwim, of which 7 showed satisfactory validity (r = 0.95–1.00, p < 0.001, %-difference < 1%). Interrater reliability was determined for all 25 parameters of the PAS, and reliability was accepted for 21 of those start, turn, and swimming parameters (ICC = 0.78–1.00). The percentiles for all valid and reliable parameters provide reference values for the assessment of start, turn, and swimming performance for junior and adult national team swimmers. The in-field PAS provides a mobile method to assess start, turn, and clean swimming performance with high validity and reliability. The analysis template and manual included in the present article aid the practical application of the PAS in research and development projects as well as academic works.
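The Bland–Altman validity analysis used here follows the standard formulation: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A short sketch with hypothetical paired measurements:

```python
import numpy as np

def bland_altman(pas, kiswim):
    """Bland-Altman agreement statistics between two measurement methods.

    pas, kiswim: paired 1D arrays of the same parameter measured by the
    mobile PAS and the KiSwim reference. Returns (bias, lower LoA, upper LoA).
    """
    diff = np.asarray(pas) - np.asarray(kiswim)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired start times (s) from both systems:
pas = np.array([6.82, 7.01, 6.54, 7.12, 6.90])
kis = np.array([6.85, 6.98, 6.57, 7.15, 6.88])
print("bias and limits of agreement:", bland_altman(pas, kis))
```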
Show Figures

Figure 1: Set-up and camera path of the sport action camera to capture (a) start and (b) turn trials.
Figure 2: Validity analysis for start performance using Bland–Altman plots with a 95% confidence interval for the difference between the methods (PAS − KiSwim values) and limits of agreement. Values on the x-axis show the means of the two methods.
Figure 3: Validity analysis for turn performance using Bland–Altman plots with a 95% confidence interval for the difference between the methods (PAS − KiSwim values) and limits of agreement. Values on the x-axis show the means of the two methods.
14 pages, 5250 KiB  
Article
Thermal Management of Friction-Drilled A356 Aluminum Alloy: A Study of Preheating and Drilling Parameters
by Ahmed Abdalkareem, Rasha Afify, Nadia Hamzawy, Tamer S. Mahmoud and Mahmoud Khedr
J. Manuf. Mater. Process. 2024, 8(6), 251; https://doi.org/10.3390/jmmp8060251 - 8 Nov 2024
Viewed by 402
Abstract
Friction drilling is a non-conventional process that generates heat through the interaction between a rotating tool and a workpiece, forming a hole with a bushing. In this study, the effects of the preheating temperature, rotational speed, and feed rate on the induced temperature during the friction drilling of A356 aluminum alloy were investigated. The study analyzes the influence of friction-drilling parameters on the thermal conditions in the induced bushing, focusing on the relationship between preheating and the resulting heat generation. An analysis of variance (ANOVA) was carried out to identify the friction-drilling parameters that contributed most to the induced temperature during processing. Experiments were conducted at various preheating temperatures (100 °C, 150 °C, 200 °C), rotational speeds (2000 rpm, 3000 rpm, 4000 rpm), and feed rates (40 mm/min, 60 mm/min, 80 mm/min). The induced temperature during the process was recorded using an infrared camera; the observed temperatures ranged from a minimum of 154.4 °C (at 2000 rpm, 60 mm/min, and 100 °C preheating) to a maximum of 366.8 °C (at 4000 rpm, 40 mm/min, and 200 °C preheating). The results show that preheating increased the peak temperature generated in the bushing during friction drilling, especially at lower rotational speeds. Increasing the rotational speed raised the induced temperature, whereas increasing the feed rate decreased it. The findings provide insights into optimizing friction-drilling parameters for enhanced thermal management of the A356 aluminum alloy.
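The ANOVA screening of factor contributions can be reproduced in outline with statsmodels. The sketch below uses a hypothetical subset of runs: only the quoted minimum and maximum temperatures come from the abstract, and the remaining rows are placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical subset of the full-factorial runs: preheat (deg C),
# speed (rpm), feed (mm/min), and the recorded peak temperature (deg C).
df = pd.DataFrame({
    "preheat": [100, 100, 150, 150, 200, 200, 100, 200],
    "speed":   [2000, 4000, 2000, 4000, 2000, 4000, 3000, 3000],
    "feed":    [60, 40, 80, 60, 40, 40, 80, 60],
    "temp":    [154.4, 300.2, 180.5, 320.7, 220.1, 366.8, 175.3, 310.9],
})

# Main-effects ANOVA with the three factors treated as categorical levels.
model = ols("temp ~ C(preheat) + C(speed) + C(feed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # sum of squares per factor
```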
Show Figures

Figure 1: Preparation steps of the A356 Al-alloy sheet plates: (a) the as-received ingot melted in an electrical furnace, (b) the molten metal poured into the steel mold, and (c) the sheet plates cut by EDM.
Figure 2: Schematic drawing representing the geometry and regions of the thermal drilling tool.
Figure 3: The CNC vertical milling machine used for the friction-drilling processes, with a magnified image of the fixture, tool, and workpiece.
Figure 4: A thermal image captured via the infrared camera displaying the temperature distribution of a specimen preheated at 200 °C before friction drilling.
Figure 5: Temperature recording and distribution during the friction-drilling stages at a rotational speed of 4000 rpm, a feed rate of 60 mm/min, and a preheating temperature of 200 °C: (a) centering, (b) tool penetration, (c) processing of the hole, (d) the tool-retracting stage, (e) complete bushing formation, and (f) tool removal.
Figure 6: The maximum induced temperature at the tool/workpiece interface during friction drilling of the A356 Al alloy at different preheating temperatures, rotational speeds, and feed rates.
Figure 7: (a) Main-effects plots and (b) Pareto chart of the friction-drilling parameters affecting the observed temperature in the produced bushings.
Figure 8: Normal probability plot and residual graphs: (a) normal probability plot for residuals, (b) residuals vs. fitted values, (c) histograms of residuals, and (d) residuals vs. the order of the data.
9 pages, 2458 KiB  
Article
Assessment of a New Gait Asymmetry Index in Patients After Unilateral Total Hip Arthroplasty
by Jarosław Kabaciński, Lechosław B. Dworak and Michał Murawa
J. Clin. Med. 2024, 13(22), 6677; https://doi.org/10.3390/jcm13226677 - 7 Nov 2024
Viewed by 319
Abstract
Background/Objectives: Comparing a given variable between the lower extremities (LEs) usually involves calculating the value of a selected asymmetry index. The aim of this study was to evaluate a mean-dependent asymmetry index for gait variables. Methods: The three-point crutch gait asymmetry between the non-surgical LE (NS) and the surgical LE (S) was assessed in 14 patients after unilateral total hip arthroplasty. An eight-camera motion capture system integrated with two force platforms was used. The values of the new gait asymmetry index (MA) were calculated for such variables as stance phase time (ST), knee flexion and extension range of motion (KFE RoM), hip flexion and extension range of motion (HFE RoM), and vertical ground reaction force (VGRF). Results: An analysis of gait asymmetry showed significantly higher values for all variables for the NS than for the S (the MA ranged from 9.9 to 42.0%; p < 0.001). In comparisons between the MA and the other indices, the intraclass correlation coefficient ranged from 0.566 to 0.998 (p < 0.001), with Bland–Altman bias values that ranged from −18.2 to 0.3 %GC (ST), from 0.0 to 0.5° (KFE RoM), from −12.4 to 1.4° (HFE RoM), and from −11.9 to −0.1 %BW (VGRF). Conclusions: The findings revealed a prominent three-point crutch gait asymmetry for all variables, especially a disturbingly large asymmetry for the HFE RoM and VGRF. The comparisons also showed generally excellent or good agreement with the other indices. Furthermore, the mean MA result from n single values was the same as the MA result calculated using the mean values of a given variable. The MA, as an accurate asymmetry index, can be used to objectively assess pathological gait asymmetry.
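The paper's own MA formula is not reproduced in this listing, but the comparator indices it is benchmarked against (RI, SI, GA) have standard definitions in the gait literature. A sketch under those standard definitions, with a hypothetical range-of-motion value pair; this illustrates only the class of indices involved, not the MA itself.

```python
import numpy as np

def ratio_index(x_ns, x_s):
    # RI: simple ratio between the surgical and non-surgical sides.
    return x_s / x_ns

def symmetry_index(x_ns, x_s):
    # SI (Robinson-style): percent difference relative to the mean of sides.
    return 100.0 * (x_ns - x_s) / (0.5 * (x_ns + x_s))

def gait_asymmetry(x_ns, x_s):
    # GA: log-ratio of the two sides expressed in percent.
    return 100.0 * np.log(x_ns / x_s)

# Hypothetical hip flexion-extension RoM values (degrees):
x_ns, x_s = 38.0, 27.0
print(f"RI = {ratio_index(x_ns, x_s):.2f}, "
      f"SI = {symmetry_index(x_ns, x_s):.1f}%, "
      f"GA = {gait_asymmetry(x_ns, x_s):.1f}%")
```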
Show Figures

Figure 1: The ICCs for the absolute agreement between the MA and the RI, GA, SI, and NSI. ICC: intraclass correlation coefficient; MA: the new asymmetry index; RI: ratio index; GA: gait asymmetry index; SI: symmetry index; NSI: normalized symmetry index; ST: stance phase time; KFE RoM: knee flexion and extension range of motion; HFE RoM: hip flexion and extension range of motion; VGRF: vertical ground reaction force.
Figure 2: Bland–Altman plots for the MA vs. the RI, the MA vs. the GA, the MA vs. the SI, and the MA vs. the NSI for (a) ST, (b) KFE RoM, (c) HFE RoM, and (d) VGRF. SD: standard deviation of the difference between the two indices; +1.96 × SD: upper limit of agreement; −1.96 × SD: lower limit of agreement; bias: the average difference between the two indices. (Other abbreviations as in Figure 1.)
14 pages, 4716 KiB  
Article
Development of a Dual-Emission Laser-Induced Fluorescence (DELIF) Method for Long-Term Temperature Measurements
by Koji Toriyama, Shumpei Funatani and Shigeru Tada
Sensors 2024, 24(22), 7136; https://doi.org/10.3390/s24227136 - 6 Nov 2024
Viewed by 333
Abstract
The fluorescence intensity of the fluorescent dyes typically employed in the dual-emission laser-induced fluorescence (DELIF) method gradually degrades as the excitation time increases, and the degradation rate depends on the type of fluorescent dye used. Therefore, the conventional DELIF method is unsuitable for long-term temperature measurements. In this study, we focused on the fluorescence intensity ratio of a single fluorescent dye at two fluorescence wavelengths and developed a DELIF method for long-term temperature measurements based on this ratio. The fluorescence intensity characteristics of Fluorescein disodium and Rhodamine B, which are typically used in the DELIF method, were comprehensively investigated in the temperature range of 10–60 °C using two high-speed monochrome complementary metal-oxide semiconductor cameras and narrow bandpass filters. Interestingly, the ratio of the fluorescence intensity of each fluorescent dye at the peak emission wavelength of the fluorescence spectrum, λ, to the fluorescence intensity at wavelengths very close to the peak wavelength (λ ± 10 nm) was highly sensitive to temperature variations but not to excitation time. In particular, when Rhodamine B was used, selecting the fluorescence intensity ratio at the wavelength combination of 589 and 600 nm enabled highly accurate temperature measurements with a temperature resolution of ≤0.042 °C. Moreover, the fluorescence intensity ratio varied negligibly throughout an excitation time of 180 min, with a measurement uncertainty (95% confidence interval) of 0.045 °C at 20 °C. The results demonstrate that the proposed DELIF method enables highly accurate long-term temperature measurements.
(This article belongs to the Collection Recent Advances in Fluorescent Sensors)
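The core of the single-dye DELIF measurement is a calibration from the two-wavelength intensity ratio to temperature. The sketch below fits a linear calibration on hypothetical Rhodamine B data; all intensity values are placeholders, and only the 600/589 nm wavelength pairing is taken from the abstract.

```python
import numpy as np

# Hypothetical calibration data: fluorescence intensity of Rhodamine B at
# 589 nm and 600 nm recorded at known bath temperatures (deg C).
T_cal = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
I_589 = np.array([1.00, 0.93, 0.86, 0.80, 0.74, 0.69])  # placeholder values
I_600 = np.array([1.00, 0.95, 0.91, 0.87, 0.83, 0.80])  # placeholder values

# The two-wavelength ratio cancels dye bleaching and illumination drift,
# which is what makes the method usable for long-term measurements.
ratio = I_600 / I_589

# Linear calibration T(ratio); a real curve may need a higher-order fit.
slope, intercept = np.polyfit(ratio, T_cal, 1)

def temperature(i_600, i_589):
    """Map a measured intensity pair to temperature via the calibration."""
    return slope * (i_600 / i_589) + intercept

print(f"T at ratio 1.05 = {temperature(1.05, 1.0):.1f} deg C")
```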
Show Figures

Figure 1: Experimental setup for fluorescence intensity measurements.
Figure 2: Fluorescence intensity variation of Rhodamine B, I/I_(t=0), with excitation time t at T = 20 °C.
Figure 3: Experimental setup for fluorescence intensity variation measurements of Fluorescein disodium and Rhodamine B at temperature T.
Figure 4: Relationship between the wavelength λ and the fluorescence intensity at various temperatures T (panels (a) and (b)).
Figure 5: Relationship between the temperature and the fluorescence intensity ratios of Fluorescein disodium for wavelength combinations of 500/510 nm and 520/510 nm, and of Rhodamine B for wavelength combinations of 580/589 nm and 600/589 nm, obtained using a spectrometer.
Figure 6: Experimental setup for the DELIF method with one dye; two cameras and bandpass filters were used in this experiment.
Figure 7: (a) Relationship between the temperature and the fluorescence intensity, and (b) relationship between the temperature and the rate of fluorescence intensity variation with temperature, at different emission wavelengths for Fluorescein disodium (λ = 500, 510, and 520 nm) and Rhodamine B (λ = 580, 589, and 600 nm).
Figure 8: Relationship between the temperature and the fluorescence intensity ratios of Fluorescein disodium for wavelength combinations of 500/510 nm and 520/510 nm, and of Rhodamine B for wavelength combinations of 580/589 nm and 600/589 nm, obtained using CMOS cameras and bandpass filters.
Figure 9: Relationships between the temperature and the temperature resolution of Fluorescein disodium for wavelength combinations of 500/510 nm and 520/510 nm, and of Rhodamine B for wavelength combinations of 580/589 nm and 600/589 nm.
Figure 10: Relationships between the excitation time and the fluorescence intensity ratios at 20 °C of Fluorescein disodium for wavelength combinations of 500/510 nm and 520/510 nm, and of Rhodamine B for wavelength combinations of 580/589 nm and 600/589 nm.
19 pages, 16743 KiB  
Article
Low-Cost and Contactless Survey Technique for Rapid Pavement Texture Assessment Using Mobile Phone Imagery
by Zhenlong Gong, Marco Bruno, Margherita Pazzini, Anna Forte, Valentina Alena Girelli, Valeria Vignali and Claudio Lantieri
Sustainability 2024, 16(22), 9630; https://doi.org/10.3390/su16229630 - 5 Nov 2024
Viewed by 493
Abstract
Collecting pavement texture information is crucial to understanding the characteristics of a road surface and obtaining essential data to support road maintenance. Traditional texture assessment techniques often require expensive equipment and complex operations. To ensure cost sustainability and reduce traffic closure times, this study proposes a rapid, cost-effective, and non-invasive surface texture assessment technique. The technique consists of capturing a set of images of a road surface with a mobile phone; the images are then used to reconstruct the 3D surface through photogrammetric processing and to derive roughness parameters for assessing the pavement texture. The results indicate that pavement images taken with a mobile phone can be used to reconstruct the 3D surface and extract texture features accurately, meeting the requirements of time-effective documentation. To validate the effectiveness of this technique, the surface structure of the pavement was analyzed in situ using a 3D structured-light projection scanner and rigorous photogrammetry with a high-end reflex camera. The results demonstrated that increasing the point cloud density can enhance the level of detail of the 3D representation of the real surface, but it also leads to variations in road surface roughness parameters. Therefore, an appropriate density should be chosen when performing three-dimensional reconstruction from mobile phone images. Mobile phone photogrammetry performs well in detecting shallow road surface textures but has certain limitations in capturing deeper textures. The texture parameters and the Abbott curves obtained using all three methods are comparable and fall within the same range of acceptability. This finding demonstrates the feasibility of using a mobile phone for pavement texture assessment with appropriate settings. Full article
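As a rough sketch of the texture analysis this abstract describes, the following example (an illustration under stated assumptions, not the authors' implementation) computes basic amplitude roughness parameters and an Abbott-Firestone (bearing-area) curve from a gridded height map such as one rasterized from a photogrammetric point cloud; the synthetic Gaussian surface is a stand-in for real data.

```python
# Minimal sketch (not the authors' code) of roughness statistics and an
# Abbott-Firestone curve derived from a gridded surface-height array.
import numpy as np

def roughness_parameters(z: np.ndarray) -> dict:
    """Mean-plane-referenced amplitude parameters for a height map (mm)."""
    dz = z - z.mean()
    return {
        "Sa": np.mean(np.abs(dz)),        # arithmetic mean height
        "Sq": np.sqrt(np.mean(dz ** 2)),  # root-mean-square height
        "Sz": dz.max() - dz.min(),        # maximum height range
    }

def abbott_curve(z: np.ndarray, n_points: int = 100):
    """Bearing-area (Abbott-Firestone) curve: height vs. material ratio.

    For each bearing ratio p (0-100 %), returns the height exceeded by
    p % of the surface, i.e. the height distribution sorted high to low.
    """
    ratios = np.linspace(0.0, 100.0, n_points)
    heights = np.percentile(z - z.mean(), 100.0 - ratios)
    return ratios, heights

# Example with a synthetic rough surface standing in for real data.
rng = np.random.default_rng(0)
surface = rng.normal(0.0, 0.4, size=(200, 200))  # heights in mm
print(roughness_parameters(surface))
ratios, heights = abbott_curve(surface)
```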
Figures:
Figure 1. Asphalt mixture grading curves.
Figure 2. The overall workflow of CRP.
Figure 3. (a) Parallel-axis capture; (b) schematic diagram of the shooting platform.
Figure 4. Example of an image acquired by the reflex camera and containing coded targets.
Figure 5. (a) The structured-light scanner employed; (b) a 3D point cloud obtained with it.
Figure 6. Image of the pavement sweeping site.
Figure 7. Dense point cloud obtained with the mobile phone-based CRP technique.
Figure 8. Cloud maps with different point cloud sizes and cloud maps from the scanner.
Figure 9. Abbott curves from different point cloud sizes and from the scanner.
Figure 10. Abbott curves for the results of the different methods at five locations.
Figure 11. Roughness parameters for the different locations.
Figure 12. Cloud maps at different locations obtained by mobile phone-based CRP.
Figure 13. Mobile phone point cloud of Location 4, represented with a color gradient showing the Z-value differences (in mm) with respect to the cloud scanned with the SLS.
Figure 14. Cloud maps in the case of contamination.
Figure 15. Abbott curves for the results of the different methods for four samples.
Figure 16. Roughness parameters for the different locations in the case of contamination.
12 pages, 1275 KiB  
Article
A Simple and Green Analytical Alternative for Chloride Determination in High-Salt-Content Crude Oil: Combining Miniaturized Extraction with Portable Colorimetric Analysis
by Alice P. Holkem, Giuliano Agostini, Adilson B. Costa, Juliano S. Barin and Paola A. Mello
Processes 2024, 12(11), 2425; https://doi.org/10.3390/pr12112425 - 3 Nov 2024
Viewed by 867
Abstract
A simple and miniaturized protocol was developed for chloride extraction from Brazilian pre-salt crude oil for subsequent salt determination by colorimetry. In this protocol, the colorimetric analysis of chloride using digital images was carried out in an aqueous phase obtained after a simple and miniaturized extraction carefully developed for this purpose. A portable device composed of a homemade 3D-printed chamber with a USB camera was used. The PhotoMetrix app converted the images into RGB histograms, and a partial least squares (PLS) model was obtained by chemometric treatment. The sample preparation was performed by extraction after defining the best conditions for the main parameters (e.g., extraction time, temperature, type and volume of solvent, and sample mass). The PLS model was evaluated considering the coefficient of determination (R²) and the root mean square errors (RMSEs) of calibration (RMSEC), cross-validation (RMSECV), and prediction (RMSEP). Under the optimized conditions, an extraction efficiency higher than 84% was achieved, and the limit of quantification was 1.6 mg g−1. The chloride content of the pre-salt crude oils ranged from 3 to 15 mg g−1, and no differences (ANOVA, 95%) were observed between the results and the reference values obtained by direct solid sampling elemental analysis (DSS-EA) or the ASTM D 6470 standard method. The easy-to-use colorimetric analysis combined with the simplicity of the extraction method offers a high-throughput, low-cost, and environmentally friendly method with the possibility of portability. Additionally, the decreased energy consumption and waste generation, together with the increased sample throughput and operator safety, make the proposed method a greener approach. Furthermore, the cost savings make it a suitable option for routine quality control, which can be attractive in the crude oil industry. Full article
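The image-to-histogram-to-PLS pipeline described here can be sketched in a few lines. The example below is a hypothetical reconstruction, not the PhotoMetrix implementation: it flattens per-channel RGB histograms into feature vectors and cross-validates a scikit-learn PLS regression against chloride values; the random images and concentrations are synthetic stand-ins for real calibration data.

```python
# Minimal sketch (assumptions only, not the PhotoMetrix implementation) of
# the chemometric step: RGB histograms of extract images are flattened into
# feature vectors and regressed against chloride content with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rgb_histogram(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Concatenate per-channel histograms of an (H, W, 3) uint8 image."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)
    ])

# Synthetic stand-in data: 20 calibration images with known chloride (mg/g).
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(20, 64, 64, 3), dtype=np.uint8)
chloride = rng.uniform(3.0, 15.0, size=20)

# Build the feature matrix and estimate RMSECV by 5-fold cross-validation.
X = np.vstack([rgb_histogram(img) for img in images])
pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, chloride, cv=5).ravel()
rmsecv = float(np.sqrt(np.mean((chloride - y_cv) ** 2)))
print(f"RMSECV on synthetic data: {rmsecv:.2f} mg/g")
```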
Figures:
Figure 1. Schematic of the apparatus optimized in this work for colorimetric analysis of chloride in crude oil aqueous extracts obtained by a miniaturized sample preparation protocol.
Figure 2. Results for chloride in aqueous extracts from crude oils produced by colorimetric analysis with the portable device or by potentiometry (0.7 g of sample, 35 min at 55 °C, with 1 mL of ethyl acetate as the solvent and 5 mL of water as the extraction solution), and reference values produced by DSS-EA (results in mg g⁻¹, mean ± standard deviation, n = 3).
Figure 3. Scores for the methods for chloride determination in crude oil by AGREEprep analysis [43]. (A) Proposed extraction–colorimetric method, (B) ASTM D 6470 standard method, and (C) DSS-EA.
18 pages, 3127 KiB  
Article
Precise Geoid Determination in the Eastern Swiss Alps Using Geodetic Astronomy and GNSS/Leveling Methods
by Müge Albayrak, Urs Marti, Daniel Willi, Sébastien Guillaume and Ryan A. Hardy
Sensors 2024, 24(21), 7072; https://doi.org/10.3390/s24217072 - 2 Nov 2024
Viewed by 640
Abstract
Astrogeodetic deflections of the vertical (DoVs) are close indicators of the slope of the geoid. Thus, DoVs observed along horizontal profiles may be integrated to create geoid undulation profiles. In this study, we collected DoV data in the Eastern Swiss Alps using a Swiss Digital Zenith Camera, the COmpact DIgital Astrometric Camera (CODIAC), and two total station-based QDaedalus systems. In the mountainous terrain of the Eastern Swiss Alps, the geoid profile was established at 15 benchmarks over a two-week period in June 2021. The elevation along the profile ranges from 1185 to 1800 m, with benchmark spacing ranging from 0.55 km to 2.10 km. DoV, gravity, GNSS, and leveling measurements were conducted on these 15 benchmarks. The collected gravity data were primarily used for corrections to the DoV-based geoid profiles, accounting for variations in station height and the geoid-quasigeoid separation. The GNSS/leveling and DoV data were both used to compute geoid heights. These geoid heights are compared with the Swiss Geoid Model 2004 (CHGeo2004) and two global gravity field models (EGM2008 and XGM2019e). Our study demonstrates that absolute geoid heights derived from GNSS/leveling data achieve centimeter-level accuracy, underscoring the precision of this method. Comparisons with CHGeo2004 predictions reveal a strong correlation, closely aligning with both GNSS/leveling and DoV-derived results. Additionally, the differential geoid height analysis highlights localized variations in the geoid surface, further validating the robustness of CHGeo2004 in capturing fine-scale geoid heights. These findings confirm the reliability of both absolute and differential geoid height calculations for precise geoid modeling in complex mountainous terrain. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
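The integration of DoVs into a geoid undulation profile (astronomical leveling) follows dN = −ε ds, where ε is the DoV component along the profile azimuth. The sketch below is a minimal illustration with invented numbers, not the survey data or processing chain used in the paper.

```python
# Minimal sketch (assumed values, not survey data) of astronomical leveling:
# deflections of the vertical projected onto the profile azimuth are
# integrated with the trapezoidal rule to give relative geoid heights,
# dN = -epsilon * ds.
import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def geoid_profile(epsilon_arcsec: np.ndarray, s_km: np.ndarray) -> np.ndarray:
    """Relative geoid undulation (m) from along-profile DoVs (arcsec).

    epsilon_arcsec : DoV component in the profile direction at each mark.
    s_km           : cumulative distance of each mark along the profile.
    """
    eps = epsilon_arcsec * ARCSEC_TO_RAD
    ds = np.diff(s_km) * 1000.0                    # segment lengths in m
    eps_mid = 0.5 * (eps[:-1] + eps[1:])           # trapezoidal mean per segment
    dN = -eps_mid * ds
    return np.concatenate([[0.0], np.cumsum(dN)])  # N relative to the first mark

# Hypothetical 15-mark profile with ~1 km spacing and DoVs of a few arcsec.
s = np.linspace(0.0, 14.0, 15)
eps = np.linspace(-4.0, 6.0, 15)
print(geoid_profile(eps, s))  # centimeter- to decimeter-level undulations
```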
Figures:
Figure 1. Surses region geoid profile in the Eastern Swiss Alps, where DoV, GNSS, leveling, and gravity observations were carried out on 15 benchmarks. Shuttle Radar Topography Mission (SRTM) elevation data are visualized along the profile.
Figure 2. The data collection and analysis scheme for this study, including the types of datasets used and the flow of information in the reduction and processing of these data points into geoid profiles.
Figure 3. Free-air anomaly (red) versus orthometric height (blue).
Figure 4. Geoid profile corrections (relative to the first mark) added to the quasigeoid profile derived from observed deflections of the vertical (DoV). These include the dynamic correction derived from observed gravity disturbances and the geoid-quasigeoid separation term.
Figure 5. (a) Absolute geoid height results from GNSS/leveling and DoV compared with CHGeo2004, EGM2008, and XGM2019e. (b) The same results relative to the first mark in the profile. (c) The same results minus CHGeo2004.
Figure 6. Differential geoid height calculation results from DoV, GNSS/leveling, EGM2008, and XGM2019e. To remove long-wavelength signals, each point shows the difference in geoid height between a point and the previous point in the survey.