Search Results (692)

Search Parameters:
Keywords = camera projection

36 pages, 33067 KiB  
Review
Geometric Wide-Angle Camera Calibration: A Review and Comparative Study
by Jianzhu Huai, Yuxin Shao, Grzegorz Jozkow, Binliang Wang, Dezhong Chen, Yijia He and Alper Yilmaz
Sensors 2024, 24(20), 6595; https://doi.org/10.3390/s24206595 (registering DOI) - 13 Oct 2024
Viewed by 228
Abstract
Wide-angle cameras are widely used in photogrammetry and autonomous systems which rely on the accurate metric measurements derived from images. To find the geometric relationship between incoming rays and image pixels, geometric camera calibration (GCC) has been actively developed. Aiming to provide practical calibration guidelines, this work surveys the existing GCC tools and evaluates the representative ones for wide-angle cameras. The survey covers the camera models, calibration targets, and algorithms used in these tools, highlighting their properties and the trends in GCC development. The evaluation compares six target-based GCC tools, namely BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator, with simulated and real data for wide-angle cameras described by four parametric projection models. These tests reveal the strengths and weaknesses of these camera models, as well as the repeatability of these GCC tools. In view of the survey and evaluation, future research directions of wide-angle GCC are also discussed. Full article
(This article belongs to the Special Issue Feature Review Papers in Intelligent Sensors)
Show Figures
Graphical abstract
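As a minimal illustration of the kind of parametric projection models this survey compares (not code from any of the reviewed tools), the following Python sketch projects a 3D point with a distorted pinhole model and with the equidistant fisheye model; all intrinsic and distortion values are invented for the example.

```python
# Minimal sketch (not from the paper): projecting a 3D point with two common
# parametric wide-angle models -- a pinhole model with radial distortion and
# the equidistant fisheye model. Parameter values are made up for illustration.
import numpy as np

def project_pinhole_radial(X, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with a two-term radial distortion polynomial."""
    x, y = X[0] / X[2], X[1] / X[2]          # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2         # radial distortion factor
    return np.array([fx * d * x + cx, fy * d * y + cy])

def project_equidistant_fisheye(X, fx, fy, cx, cy):
    """Equidistant fisheye model: image radius proportional to the ray angle theta."""
    r = np.hypot(X[0], X[1])
    theta = np.arctan2(r, X[2])               # angle between the ray and the optical axis
    scale = theta / r if r > 0 else 0.0
    return np.array([fx * scale * X[0] + cx, fy * scale * X[1] + cy])

if __name__ == "__main__":
    P = np.array([0.8, 0.3, 1.0])             # a point 1 m in front of the camera
    print(project_pinhole_radial(P, 400, 400, 320, 240, -0.2, 0.03))
    print(project_equidistant_fisheye(P, 400, 400, 320, 240))
```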
21 pages, 3173 KiB  
Article
Methods for Assessing the Effectiveness of Modern Counter Unmanned Aircraft Systems
by Konrad D. Brewczyński, Marek Życzkowski, Krzysztof Cichulski, Kamil A. Kamiński, Paraskevi Petsioti and Geert De Cubber
Remote Sens. 2024, 16(19), 3714; https://doi.org/10.3390/rs16193714 - 6 Oct 2024
Viewed by 566
Abstract
Given the growing threat posed by the widespread availability of unmanned aircraft systems (UASs), which can be utilised for various unlawful activities, the need for a standardised method to evaluate the effectiveness of systems capable of detecting, tracking, and identifying (DTI) these devices has become increasingly urgent. This article draws upon research conducted under the European project COURAGEOUS, where 260 existing drone detection systems were analysed, and a methodology was developed for assessing the suitability of C-UASs in relation to specific threat scenarios. The article provides an overview of the most commonly employed technologies in C-UASs, such as radars, visible light cameras, thermal imaging cameras, laser range finders (lidars), and acoustic sensors. It explores the advantages and limitations of each technology, highlighting their reliance on different physical principles, and also briefly touches upon the legal implications associated with their deployment. The article presents the research framework and provides a structural description, alongside the functional and performance requirements, as well as the defined metrics. Furthermore, the methodology for testing the usability and effectiveness of individual C-UAS technologies in addressing specific threat scenarios is elaborated. Lastly, the article offers a concise list of prospective research directions concerning the analysis and evaluation of these technologies. Full article
(This article belongs to the Special Issue Drone Remote Sensing II)
Show Figures
Figure 1: Research framework of the COURAGEOUS project.
Figure 2: Structural description of the COURAGEOUS project.
Figure 3: Relevant (according to the COURAGEOUS project) C-UAS products.
Figure 4: Technologies used for detecting, tracking, and identifying (DTI) in C-UAS solutions.
Figure 5: Integration of the C-UAS solution with external control and analysis systems.
Figure 6: Combinations of technologies used in C-UAS solutions. The data label contains the percentage share of the given combination of technologies in relation to the C-UASs present in the base and the number of them.
Figure 7: C-UAS technology correlation.
Figure 8: The use of AI in the detection and identification of drones.
Figure 9: A simplified algorithm for assessing the effectiveness of modern C-UASs.
23 pages, 11530 KiB  
Article
Vibrator Rack Pose Estimation for Monitoring the Vibration Quality of Concrete Using Improved YOLOv8-Pose and Vanishing Points
by Bingyu Ren, Xiaofeng Zheng, Tao Guan and Jiajun Wang
Buildings 2024, 14(10), 3174; https://doi.org/10.3390/buildings14103174 - 5 Oct 2024
Viewed by 673
Abstract
Monitoring the actual vibration coverage is critical for preventing over- or under-vibration and ensuring concrete’s strength. However, the current manual methods and sensor techniques fail to meet the requirements of on-site construction. Consequently, this study proposes a novel approach for estimating the pose of concrete vibrator racks. This method integrates the Large Separable Kernel Attention (LSKA) module into the You Only Look Once (YOLO) framework to accurately detect the keypoints of the rack and then employs the vanishing point theorem to estimate the rotation angle of the rack without any 3D datasets. The method enables the monitoring of the vibration impact range for each vibrator’s activity and is applicable to various camera positions. Given that measuring the rotation angle of a rack in reality is challenging, this study proposes employing a simulation environment to validate both the feasibility and accuracy of the proposed method. The results demonstrate that the improved YOLOv8-Pose achieved a 1.4% increase in accuracy compared with YOLOv8-Pose, and the proposed method monitored the rotation angle with an average error of 6.97° while maintaining a working efficiency of over 35 frames per second. This methodology was successfully implemented at a construction site for a high-arch dam project in China. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
Show Figures
Figure 1: The concrete vibrator and the positions of individual sensors.
Figure 2: The framework for estimating the rotation angle of a vibration rack.
Figure 3: A vibration rack’s keypoints. Among them, Points 5, 6, 9 and 10 are located on the opposite side of the vibration rack, corresponding, respectively, to Points 4, 3, 8 and 7.
Figure 4: Large Separable Kernel Attention.
Figure 5: Improved YOLOv8-Pose framework.
Figure 6: Vanishing points and vanishing lines in central projection.
Figure 7: Logic of rotation angle estimation.
Figure 8: Principle of recognizing the rotation angle of the vibration rack: (a) vanishing point diagram; (b) angle solution diagram.
Figure 9: Relative positional relationships of the models.
Figure 10: Simulation environment.
Figure 11: Comparison between the actual vibration rack’s position recorded by the sensor and the vibration rack’s position in the simulation.
Figure 12: Experimental data statistics: (a) the first test; (b) the second test; (c) the third test; (d) all the tests.
Figure 13: Vibration rack rotation angle detection process in the simulation: (a) move; (b) stop; (c) insert; (d) vibrate; (e) pull out; (f) move.
Figure 14: Enhanced intelligent vibration monitoring system.
Figure 15: The construction site.
Figure 16: Vibration rack rotation angle detection process in reality: (a) move; (b) stop; (c) insert; (d) vibrate; (e) pull out; (f) move.
Figure 17: Angle estimations of different fps.
Figure 18: Precision–recall (P-R) curves.
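The vanishing-point step described in the abstract above can be illustrated with a small geometric sketch. The Python snippet below (an assumed formulation, not the authors' code) intersects two image lines that are parallel in 3D, such as two edges of the rack, to obtain their vanishing point, then back-projects it through assumed intrinsics K to recover a yaw-like rotation angle.

```python
# Minimal sketch (assumed, not the paper's implementation): recover a yaw-like
# rotation angle from the vanishing point of two image lines that are parallel
# in 3D (e.g., two edges of the vibrator rack), given camera intrinsics K.
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersect lines p1-p2 and q1-q2 in homogeneous image coordinates."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*q1, 1.0], [*q2, 1.0])
    v = np.cross(l1, l2)
    return v / v[2]                      # back to inhomogeneous pixel coordinates

def yaw_from_vanishing_point(v, K):
    """The vanishing point back-projects to the 3D direction of the parallel lines."""
    d = np.linalg.inv(K) @ v             # direction of the lines in the camera frame
    return np.degrees(np.arctan2(d[0], d[2]))

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
    # Two rack edges given as keypoint pairs (illustrative pixel coordinates).
    v = vanishing_point((100, 300), (250, 280), (120, 400), (270, 370))
    print(f"estimated rotation angle: {yaw_from_vanishing_point(v, K):.1f} deg")
```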
21 pages, 6492 KiB  
Article
An Image-Based Sensor System for Low-Cost Airborne Particle Detection in Citizen Science Air Quality Monitoring
by Syed Mohsin Ali Shah, Diego Casado-Mansilla and Diego López-de-Ipiña
Sensors 2024, 24(19), 6425; https://doi.org/10.3390/s24196425 - 4 Oct 2024
Viewed by 612
Abstract
Air pollution poses significant public health risks, necessitating accurate and efficient monitoring of particulate matter (PM). These organic compounds may be released from natural sources like trees and vegetation, as well as from anthropogenic, or human-made sources including industrial activities and motor vehicle emissions. Therefore, measuring PM concentrations is paramount to understanding people’s exposure levels to pollutants. This paper introduces a novel image processing technique utilizing photographs/pictures of Do-it-Yourself (DiY) sensors for the detection and quantification of PM10 particles, enhancing community involvement and data collection accuracy in Citizen Science (CS) projects. A synthetic data generation algorithm was developed to overcome the challenge of data scarcity commonly associated with citizen-based data collection to validate the image processing technique. This algorithm generates images by precisely defining parameters such as image resolution, image dimension, and PM airborne particle density. To ensure these synthetic images mimic real-world conditions, variations like Gaussian noise, focus blur, and white balance adjustments and combinations were introduced, simulating the environmental and technical factors affecting image quality in typical smartphone digital cameras. The detection algorithm for PM10 particles demonstrates robust performance across varying levels of noise, maintaining effectiveness in realistic mobile imaging conditions. Therefore, the methodology retains sufficient accuracy, suggesting its practical applicability for environmental monitoring in diverse real-world conditions using mobile devices. Full article
Show Figures
Figure 1: Air meter to monitor dust levels by covering it with a layer of petroleum jelly. Paper covered with a layer of petroleum jelly retains airborne particles, which stick to the adhesive material [21].
Figure 2: Bicubic interpolation at position (i′, j′) [38].
Figure 3: Flow diagram of proposed methodology.
Figure 4: Image samples (A–C) were gathered by [21]; samples (E–G) are synthetic images generated by the proposed method.
Figure 5: (A) Once the air meter has been printed, students can proceed to cut it out. (B) The air meter is then affixed to a milk carton using silver duct tape. (C) A fine layer of petroleum jelly is applied to the surface using a finger. (D) Finally, the air meter is mounted outdoors with the help of silver duct tape [21].
Figure 6: Flow diagram of generation of synthetic data samples and addition of noise.
Figure 7: Comparison of accuracy of proposed methodology with addition of Gaussian noise for different input mean and variance values.
Figure 8: Comparison of accuracy of proposed methodology with addition of focus blur noise for different input blur strength and focus area values.
Figure 9: Comparison of accuracy of proposed methodology with addition of white balance noise for different input values.
Figure 10: Comparison of accuracy of proposed methodology with addition of Gaussian noise and white balance noise for different input values.
Figure 11: Comparison of accuracy of proposed methodology with addition of Gaussian noise, focus blur, and white balance noise for different input values.
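A rough idea of the image degradations used above to make the synthetic samples realistic (Gaussian noise, focus blur, white-balance shifts) is sketched below in Python; the blur kernel, channel gains, and toy image are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch (illustrative, not the paper's code): degrading a synthetic
# particle image with Gaussian noise, a crude focus blur, and a white-balance
# gain, roughly mimicking the smartphone capture variations described above.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, mean=0.0, var=25.0):
    noisy = img + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0, 255)

def box_blur(img, k=3):
    """Very simple blur: average each pixel with its shifted copies."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (k * k)

def white_balance_shift(img, gains=(1.1, 1.0, 0.9)):
    """Scale the R, G, B channels by different gains."""
    return np.clip(img * np.array(gains), 0, 255)

if __name__ == "__main__":
    # A toy 64x64 RGB image: white background with a few dark "particles".
    img = np.full((64, 64, 3), 255.0)
    for y, x in [(10, 12), (30, 40), (50, 20)]:
        img[y:y + 2, x:x + 2] = 30.0
    degraded = white_balance_shift(box_blur(add_gaussian_noise(img)))
    print(degraded.shape, degraded.min(), degraded.max())
```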
14 pages, 5474 KiB  
Article
Assessment of Staining Patterns in Facades Using an Unmanned Aerial Vehicle (UAV) and Infrared Thermography
by João Arthur dos Santos Ferreira, Fernanda Ramos Luiz Carrilho, Jean Augusto Ortiz Alcantara, Camile Gonçalves, Carina Mariane Stolz, Mayara Amario and Assed N. Haddad
Drones 2024, 8(10), 542; https://doi.org/10.3390/drones8100542 - 1 Oct 2024
Viewed by 468
Abstract
The emergence of pathological manifestations on facades persists globally, with recurring failures occurring often due to repeated construction details or design decisions. This study selected a building with a recurring architectural design and evaluated the stain pattern on its facade using a UAV with an infrared thermal camera. The results showed that advanced technology offers a non-invasive and efficient approach for comprehensive inspections, enabling early detection and targeted interventions to preserve architectural assets without requiring ancillary infrastructure or risking workers at height. The precise identification of damage clarified the real causes of the observed pathological manifestations. Capturing the images allowed accurate inspection, revealing hollow and damp spots not visible to the human eye. Novel results highlight patterns in the appearance of dirt on facades, related to water flow that could have been redirected through proper geometric element execution. The presented inspection methodology, staining standards, and construction details can be easily applied to any building, regardless of location. Sills, drip pans, and flashings must have drip cuts, adequate inclination, and projections to prevent building degradation. Full article
(This article belongs to the Section Drone Design and Development)
Show Figures
Figure 1: Flowchart of the study’s stages.
Figure 2: The top view of the building, identifying the solar orientation and setback position.
Figure 3: The building facade views.
Figure 4: Digital camera images of the building’s studied facades.
Figure 5: The UAV flight plan.
Figure 6: Damage maps.
Figure 7: The (a) digital and (b) thermographic images of the staining above the windows.
Figure 8: The staining at the sill joints (a) and at the sill edges (b) on the ceramic-coated south facade.
Figure 9: The staining by dirt accumulation in the facade’s drip pans: (a) east facade; (b) west setback.
Figure 10: The dirt on the top of the building, specifically on the flashings and copings of the walls: (a) digital image; (b) thermographic image.
Figure 11: Windowsill recommendation: (a) installed in the window; (b) sill in section.
Figure 12: The caps with drips for parapets, walls, and low walls.
17 pages, 21943 KiB  
Article
Evaluation of Direct Sunlight Availability Using a 360° Camera
by Diogo Chambel Lopes and Isabel Nogueira
Solar 2024, 4(4), 555-571; https://doi.org/10.3390/solar4040026 - 1 Oct 2024
Viewed by 538
Abstract
One important aspect to consider when buying a house or apartment is adequate solar exposure. The same applies to the evaluation of the shadowing effects of existing buildings on prospective construction sites and vice versa. In different climates and seasons, it is not always easy to assess if there will be an excess or lack of sunlight, and both can lead to discomfort and excessive energy consumption. The aim of our project is to design a method to quantify the availability of direct sunlight to answer these questions. We developed a tool in Octave to calculate representative parameters, such as sunlight hours per day over a year and the times of day for which sunlight is present, considering the surrounding objects. The apparent sun position over time is obtained from an existing algorithm and the surrounding objects are surveyed using a picture taken with a 360° camera from a window or other sunlight entry area. The sky regions in the picture are detected and all other regions correspond to obstructions to direct sunlight. The sky detection is not fully automatic, but the sky swap tool in the camera software could be adapted by the manufacturer for this purpose. We present the results for six representative test cases. Full article
Show Figures
Figure 1: Example of a 360° image (rotated 90°).
Figure 2: Camera sitting on a leveled structure attached to a windowsill.
Figure 3: Coordinates transformation from (Az′, El) to (α, β).
Figure 4: Transformation from (α, β) to pixel coordinates (i, j).
Figures 5–28: Results for the six test cases (Case 1: orientation = −2.1°, north; Case 2: 75.5°, northeast; Case 3: 146.6°, southeast; Case 4: 177.5°, south; Case 5: 255.5°, southwest; Case 6: 326.6°, northwest). For each case, the figures show (a) the 360° picture taken from the window and (b) the Boolean image where white corresponds to sky and all other regions appear in black; the times when the sun was visible (black: not visible; dark gray: all directions; light gray: −90° ≤ Az′ ≤ 90°; white: considering obstacles); the sunlight hours per day (dashed line: all directions; dash-dotted line: −90° ≤ Az′ ≤ 90°; solid line: considering obstacles); and the sunlight availability percentage (solid line: daily ratio of available sunlight with obstacles to without obstacles; dashed line: mean value over a year).
Figure 29: Direct sunlight hours for different orientations in the region of Lisbon.
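The core lookup described above, mapping the apparent sun position into the 360° image and testing it against the sky mask, can be sketched as follows; the equirectangular mapping and the toy mask are assumptions for illustration, not the authors' Octave tool.

```python
# Minimal sketch (assumed mapping, not the authors' Octave tool): test whether
# the sun is unobstructed by looking up its (azimuth, elevation) direction in a
# Boolean sky mask derived from an equirectangular 360-degree picture.
import numpy as np

def sun_to_pixel(az_deg, el_deg, width, height):
    """Map azimuth in [-180, 180) and elevation in [-90, 90] to an equirectangular pixel (row, col)."""
    col = int((az_deg + 180.0) / 360.0 * (width - 1))
    row = int((90.0 - el_deg) / 180.0 * (height - 1))
    return row, col

def sun_visible(sky_mask, az_deg, el_deg):
    """sky_mask is a Boolean array: True where the pixel shows open sky."""
    row, col = sun_to_pixel(az_deg, el_deg, sky_mask.shape[1], sky_mask.shape[0])
    return bool(sky_mask[row, col])

if __name__ == "__main__":
    # Toy mask: sky everywhere above the horizon except one "building" block.
    h, w = 180, 360
    sky = np.zeros((h, w), dtype=bool)
    sky[:90, :] = True                 # upper half = above the horizon
    sky[40:90, 200:260] = False        # an obstructing building
    print(sun_visible(sky, az_deg=-60.0, el_deg=45.0))  # open sky -> True
    print(sun_visible(sky, az_deg=45.0, el_deg=20.0))   # blocked  -> False
```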
11 pages, 2269 KiB  
Article
FBG Interrogator Using a Dispersive Waveguide Chip and a CMOS Camera
by Zhenming Ding, Qing Chang, Zeyu Deng, Shijie Ke, Xinhong Jiang and Ziyang Zhang
Micromachines 2024, 15(10), 1206; https://doi.org/10.3390/mi15101206 - 29 Sep 2024
Viewed by 510
Abstract
Optical sensors using fiber Bragg gratings (FBGs) have become an alternative to traditional electronic sensors thanks to their immunity against electromagnetic interference, their applicability in harsh environments, and other advantages. However, the complexity and high cost of the FBG interrogation systems pose a challenge for the wide deployment of such sensors. Herein, we present a clean and cost-effective method for interrogating an FBG temperature sensor using a micro-chip called the waveguide spectral lens (WSL) and a standard CMOS camera. This interrogation system can project the FBG transmission spectrum onto the camera without any free-space optical components. Based on this system, an FBG temperature sensor is developed, and the results show good agreement with a commercial optical spectrum analyzer (OSA), with the respective wavelength-temperature sensitivity measured as 6.33 pm/°C for the WSL camera system and 6.32 pm/°C for the commercial OSA. Direct data processing on the WSL camera system translates this sensitivity to 0.44 μm/°C in relation to the absolute spatial shift of the FBG spectra on the camera. Furthermore, a deep neural network is developed to train the spectral dataset, achieving a temperature resolution of 0.1 °C from 60 °C to 120 °C, while direct processing on the valley/dark line detection yields a resolution of 7.84 °C. The proposed hardware and the data processing method may lead to the development of a compact, practical, and low-cost FBG interrogator. Full article
(This article belongs to the Special Issue Fiber Optic Sensing Technology: From Materials to Applications)
Show Figures
Figure 1: Schematic diagram of the FBG interrogation system based on a WSL. SLED: superluminescent light emitting diode; ISO: isolator; FBG: fiber Bragg grating; WSL: waveguide spectral lens; CMOS: complementary metal oxide semiconductor.
Figure 2: Schematic of the WSL. Inset (I) shows the cross section of the waveguide. Inset (II) shows the waveguides with output tapers designed for reducing the free-space diffraction loss. BBA: beam broadening area.
Figure 3: (a) Photo of the fabricated devices and a matchstick for size comparison. Bottom WSL is used for the FBG interrogation. Wavelength calibration of WSL: (b) captured spectral lines at 779.092 nm, 802.528 nm, and 824.616 nm, respectively; (c) normalized intensity distributions of the spectral lines; (d) wavelength calibration result.
Figure 4: (a) Comparison between Δλ_B of the sensing FBG obtained from the OSA and from the WSL-based interrogator. (b) The relation between the spectral line shift and temperature.
Figure 5: (a) Image preprocessing and architecture of the neural network. (b) The loss variation in DNN training process. MSE: mean squared error. (c) Scatter plot of actual temperature vs. temperature predicted by DNN.
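Using the sensitivities quoted in the abstract, converting a measured spectral shift into a temperature change is a one-line linear calculation; the sketch below is illustrative only and assumes the slopes come from a prior calibration run.

```python
# Minimal sketch (illustrative): converting a measured FBG spectral shift into
# a temperature change using the linear sensitivities reported in the abstract
# (6.33 pm/degC in Bragg wavelength, 0.44 um/degC in spatial shift on the CMOS
# camera). A real interrogator would fit these slopes from calibration data.
WAVELENGTH_SENSITIVITY_PM_PER_C = 6.33   # pm/degC (WSL-camera system)
SPATIAL_SENSITIVITY_UM_PER_C = 0.44      # um/degC (dark-line shift on the camera)

def temp_change_from_wavelength_shift(delta_lambda_pm: float) -> float:
    return delta_lambda_pm / WAVELENGTH_SENSITIVITY_PM_PER_C

def temp_change_from_spatial_shift(delta_x_um: float) -> float:
    return delta_x_um / SPATIAL_SENSITIVITY_UM_PER_C

if __name__ == "__main__":
    print(f"{temp_change_from_wavelength_shift(63.3):.1f} degC")  # ~10 degC
    print(f"{temp_change_from_spatial_shift(4.4):.1f} degC")      # ~10 degC
```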
27 pages, 33174 KiB  
Article
Automated Windrow Profiling System in Mechanized Peanut Harvesting
by Alexandre Padilha Senni, Mario Luiz Tronco, Emerson Carlos Pedrino and Rouverson Pereira da Silva
AgriEngineering 2024, 6(4), 3511-3537; https://doi.org/10.3390/agriengineering6040200 - 25 Sep 2024
Viewed by 413
Abstract
In peanut cultivation, the fact that the fruits develop underground presents significant challenges for mechanized harvesting, leading to high loss rates, with values that can exceed 30% of the total production. Since the harvest is conducted indirectly in two stages, losses are higher during the digging/inverter stage than the collection stage. During the digging process, losses account for about 60% to 70% of total losses, and this operation directly influences the losses during the collection stage. Experimental studies in production fields indicate a strong correlation between losses and the height of the windrow formed after the digging/inversion process, with a positive correlation coefficient of 98.4%. In response to this high correlation, this article presents a system for estimating the windrow profile during mechanized peanut harvesting, allowing for the measurement of crucial characteristics such as the height, width and shape of the windrow, among others. The device uses an infrared laser beam projected onto the ground. The laser projection is captured by a camera strategically positioned above the analyzed area, and through advanced image processing techniques using triangulation, it is possible to measure the windrow profile at sampled points during a real experiment under direct sunlight. The technical literature does not mention any system with these specific characteristics utilizing the techniques described in this article. A comparison between the results obtained with the proposed system and those obtained with a manual profilometer showed a root mean square error of only 28 mm. The proposed system demonstrates significantly greater precision and operates without direct contact with the soil, making it suitable for dynamic implementation in a control mesh for a digging/inversion device in mechanized peanut harvesting and, with minimal adaptations, in other crops, such as beans and potatoes. Full article
Show Figures
Figure 1: Equipment used for peanut digging, i.e., digger–inverter. (a) Schematic illustration. (b) Commercial equipment.
Figure 2: Operation of the digger–inverter, forming the windrow [33].
Figure 3: Collecting losses during digging: (a) visible losses; (b) invisible losses [18].
Figure 4: Manual measurement of the peanut windrow: (a) width of the windrow; (b) height of the windrow; (c) profilometer measurement [35].
Figure 5: Thematic map of peanut crop windrow heights [27].
Figure 6: Thematic map of visible losses in peanut harvesting [27].
Figure 7: Block diagram of the automated peanut bed profiling system.
Figure 8: Image subtraction process.
Figure 9: Image acquisition sequences.
Figure 10: Average pixel intensity calculated for each column of the image.
Figure 11: Process of emphasizing horizontal edges using Prewitt filter (top) and application of averaging filter (bottom).
Figure 12: Thresholding process.
Figure 13: Image used as a mask obtained after the erosion and dilation operations (top). Result of adaptive beam segmentation (middle). Result of adaptive beam segmentation after applying the mask (bottom).
Figure 14: Improvement through morphological operations.
Figure 15: Thinning using the traditional method.
Figure 16: Proposed thinning technique.
Figure 17: Strategy for representing the centroid of connected elements (top). Result of the proposed thinning process (bottom).
Figure 18: Process of identifying connected elements (top). Length of connected element (bottom).
Figure 19: Process of eliminating overlapping pixels (top). Process of eliminating outliers (bottom).
Figure 20: Dimensionality transformation. The upper part shows the image of the beam; the bottom shows the coordinates of the beam points after dimensionality transformation.
Figure 21: Interpolation process.
Figure 22: Filtering process for final removal of outliers.
Figure 23: Triangulation model.
Figure 24: Windrow used in the experiment.
Figure 25: Experimental platform.
Figure 26: Profilometer.
Figure 27: Photo of the profilometer, image of the ground with the laser on, image of the ground with the laser off, and the image resulting from the difference between the two.
Figure 28: Image resulting from the acquisition process and segmented image.
Figures 29–31: Images resulting from the beam detection process.
Figure 32: Result of the reconstruction of the windrow profile using the profilometer measurements.
Figure 33: Result of the reconstruction of the windrow using the proposed sensor.
Figure 34: Dynamic adjustment of the digging mechanism according to the height of the windrow obtained by the proposed sensor.
Figure 35: Expected losses. Comparison between the results obtained with the profilometer and the proposed sensor.
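The triangulation principle behind the profiler can be sketched with a generic laser-line setup: a downward-looking camera at height H and a laser sheet inclined by an angle alpha, so that a raised surface shifts the stripe laterally in the image. The geometry and all numbers below are assumptions for illustration, not the paper's calibration.

```python
# Minimal sketch (generic laser triangulation; parameters are assumptions, not
# the paper's calibration): recover surface height from the lateral pixel shift
# of a projected laser line, with a downward-looking camera at height H and a
# laser sheet inclined by alpha from the vertical.
import math

def height_from_pixel_shift(delta_px: float, H_m: float, f_px: float, alpha_deg: float) -> float:
    """h = delta * H / (f * tan(alpha) + delta), derived from similar triangles."""
    t = math.tan(math.radians(alpha_deg))
    return delta_px * H_m / (f_px * t + delta_px)

if __name__ == "__main__":
    # Assumed rig: camera 1.2 m above the ground, 1000 px focal length,
    # laser sheet tilted 30 degrees from vertical.
    for shift in (0.0, 50.0, 150.0):
        h = height_from_pixel_shift(shift, H_m=1.2, f_px=1000.0, alpha_deg=30.0)
        print(f"pixel shift {shift:5.1f} -> windrow height {h * 100:.1f} cm")
```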
23 pages, 8683 KiB  
Article
MicroGravity Explorer Kit (MGX): An Open-Source Platform for Accessible Space Science Experiments
by Waldenê de Melo Moura, Carlos Renato dos Santos, Moisés José dos Santos Freitas, Adriano Costa Pinto, Luciana Pereira Simões and Alison Moraes
Aerospace 2024, 11(10), 790; https://doi.org/10.3390/aerospace11100790 - 25 Sep 2024
Viewed by 683
Abstract
The study of microgravity, a condition in which an object experiences near-zero weight, is a critical area of research with far-reaching implications for various scientific disciplines. Microgravity allows scientists to investigate fundamental physical phenomena influenced by Earth’s gravitational forces, opening up new possibilities in fields such as materials science, fluid dynamics, and biology. However, the complexity and cost of developing and conducting microgravity missions have historically limited the field to well-funded space agencies, universities with dedicated government funding, and large research institutions, creating a significant barrier to entry. This paper presents the MicroGravity Explorer Kit’s (MGX) design, a multifunctional platform for conducting microgravity experiments aboard suborbital rocket flights. The MGX aims to democratize access to microgravity research, making it accessible to high school students, undergraduates, and researchers. To ensure that the tool is versatile across different scenarios, the authors conducted a comprehensive literature review on microgravity experiments, and specific requirements for the MGX were established. The MGX is designed as an open-source platform that supports various experiments, reducing costs and accelerating development. The multipurpose experiment consists of a Jetson Nano computer with multiple sensors, such as inertial sensors, temperature and pressure, and two cameras with up to 4k resolution. The project also presents examples of codes for data acquisition and compression and the ability to process images and run machine learning algorithms to interpret results. The MGX seeks to promote greater participation and innovation in space sciences by simplifying the process and reducing barriers to entry. The design of a platform that can democratize access to space and research related to space sciences has the potential to lead to groundbreaking discoveries and advancements in materials science, fluid dynamics, and biology, with significant practical applications such as more efficient propulsion systems and novel materials with unique properties. Full article
(This article belongs to the Section Astronautics & Space Science)
Show Figures
Figure 1: Examples of microgravity experiments. Left panel: Illustration of the spring-mass system educational experiment. Middle panel: Furnace for investigating the influence of gravity on the solidification of a Pb-Sn eutectic alloy. Right panel: Sample ampoules before solidification in the microgravity flight.
Figure 2: Means of access to the microgravity environment.
Figure 3: View of the Bremen Drop Tower facility at the University of Bremen, Germany.
Figure 4: Typical parabolic flight profile.
Figure 5: External view of the International Space Station (left panel). An experiment conducted in a microgravity environment on the ISS for the horticultural lighting project (right panel).
Figure 6: Illustration of a rocket-based suborbital flight profile for performing microgravity experiments.
Figure 7: Diagram of the microgravity experiment, the rocket, and the launch center.
Figure 8: Illustration of the MGX architecture divided into two parts. On the left-hand side is the electronic processing module with dimensions of 1U. On the right-hand side is an example of the experimental module, which can be customized in size according to specific research demands, containing sensor instrumentation and cameras.
Figure 9: Hardware schematics of the electronic processing module.
Figure 10: Illustration of electronic sensor capabilities inside the experiment module, including I2C, analog sensors, UART, and 4K camera inputs, besides actuators.
Figure 11: Diagram of U-Net architecture illustrating the process of segmenting cells in biological tissue images.
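A generic data-acquisition loop of the kind the MGX abstract describes (sampling sensors at a fixed rate and writing compressed records) might look like the Python sketch below; read_sensors() is a placeholder returning simulated values, not the actual MGX driver code, and the file name and rates are hypothetical.

```python
# Minimal sketch (not the MGX flight code): a generic acquisition loop that
# samples sensors at a fixed rate and appends gzip-compressed JSON records.
# read_sensors() is a placeholder with simulated values; on the real platform
# it would query the IMU, temperature, and pressure devices.
import gzip
import json
import random
import time

def read_sensors() -> dict:
    """Placeholder: replace with actual sensor queries (I2C/UART/analog)."""
    return {
        "t": time.time(),
        "accel_g": [random.gauss(0, 0.01) for _ in range(3)],   # near-zero g
        "temp_c": 21.5 + random.gauss(0, 0.05),
        "press_hpa": 2.1 + random.gauss(0, 0.01),
    }

def acquire(path: str, rate_hz: float = 10.0, duration_s: float = 3.0) -> None:
    period = 1.0 / rate_hz
    with gzip.open(path, "at", encoding="utf-8") as log:
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            log.write(json.dumps(read_sensors()) + "\n")
            time.sleep(period)

if __name__ == "__main__":
    acquire("mgx_log.jsonl.gz")
    with gzip.open("mgx_log.jsonl.gz", "rt", encoding="utf-8") as f:
        print(sum(1 for _ in f), "records written")
```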
15 pages, 4976 KiB  
Article
Augmented Reality Interface for Adverse-Visibility Conditions Validated by First Responders in Rescue Training Scenarios
by Xabier Oregui, Anaida Fernández García, Izar Azpiroz, Blanca Larraga-García, Verónica Ruiz, Igor García Olaizola and Álvaro Gutiérrez
Electronics 2024, 13(18), 3739; https://doi.org/10.3390/electronics13183739 - 20 Sep 2024
Viewed by 427
Abstract
Updating the equipment of the first responder (FR) by providing them with new capabilities and useful information will inevitably lead to better mission success rates and, therefore, more lives saved. This paper describes the design and implementation of a modular interface for augmented reality displays integrated into standard FR equipment that will provide support during the adverse-visibility situations that the rescuers find during their missions. This interface includes assistance based on the machine learning module denoted as Robust Vision Module, which detects relevant objects in a rescue scenario, particularly victims, using the feed from a thermal camera. This feed can be displayed directly alongside the detected objects, helping FRs to avoid missing anything during their operations. Additionally, the information exposition in the interface is organized according to the biometrical parameters of FRs during the operations. The main novelty of the project is its orientation towards useful solutions for FRs focusing on something occasionally ignored during research projects: the point of view of the final user. The functionalities have been designed after multiple iterations between researchers and FRs, involving testing and evaluation through realistic situations in training scenarios. Thanks to this feedback, the overall satisfaction according to the evaluations of 18 FRs is 3.84 out of 5 for the Robust Vision Module and 3.99 out of 5 for the complete AR interface. These functionalities and the different display modes available for the FRs to adapt to each situation are detailed in this paper. Full article
Show Figures
Figure 1: Architecture of the solution for the command center and the FR team, based on [8].
Figure 2: Smart Helmet prototype developed during the H2020 European RESCUER project [27].
Figure 3: AR Sensor Data View example where all tool icons are displayed with their corresponding value and color according to Table 1.
Figure 4: Thermal camera feed view as additional background for the AR Sensor Data View information.
Figure 5: Live Situation Map View example.
17 pages, 5949 KiB  
Article
Influence of Camera Placement on UGV Teleoperation Efficiency in Complex Terrain
by Karol Cieślik, Piotr Krogul, Tomasz Muszyński, Mirosław Przybysz, Arkadiusz Rubiec and Rafał Kamil Typiak
Appl. Sci. 2024, 14(18), 8297; https://doi.org/10.3390/app14188297 - 14 Sep 2024
Viewed by 467
Abstract
Many fields where human health and life are at risk are increasingly utilizing mobile robots and UGVs (Unmanned Ground Vehicles). They typically operate in teleoperation mode (control based on the projected image, outside the operator’s direct field of view), as autonomy is not yet sufficiently developed and key decisions should be made by a human operator. Fast and effective decision making requires a high level of situational and action awareness. It relies primarily on visualizing the robot’s surroundings and end effectors using cameras and displays. This study aims to compare the effectiveness of three configurations of robot area imaging systems using the simultaneous transmission of images from three cameras while driving a UGV in complex terrain. Full article
(This article belongs to the Section Robotics and Automation)
Show Figures
Figure 1: Three tested configurations of camera settings on the UGV and their FOV: (a) configuration 1; (b) configuration 2; (c) configuration 3.
Figure 2: Test tracks of (a) trial 1; (b) trial 2.
Figure 3: Example view from the research: (a) trial 1; (b) trial 2.
Figure 4: Experimental stand: (a) controlled UGV; (b) operator station.
Figure 5: Driving time for the 3 camera configurations and driving with direct observation of the surroundings—trial 1.
Figure 6: Time to reach the obstacle for the 3 camera configurations and driving with direct observation of the surroundings—trial 1—task 1.
Figure 7: Cornering time for the 3 camera configurations and driving with direct observation of the surroundings—trial 1—task 2.
Figure 8: Departure time to the open space for the 3 camera configurations and driving with direct observation of the surroundings—trial 1—task 3.
Figure 9: Number of lane violations per ride for the 3 camera configurations—trial 1.
Figure 10: Average travel time for the 3 camera configurations and driving with direct observation of the surroundings—trial 2.
Figure 11: Average number of undetected and unrecognized obstacles per ride for the 3 camera configurations—trial 2.
Figure 12: Average number of obstacle and lane violations per ride for the 3 camera configurations—trial 2.
25 pages, 4182 KiB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Viewed by 550
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual–inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot’s body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORBSLAM3 algorithm reveals that, in the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results. Full article
Show Figures
Figure 1: Block diagram of mobile robot visual SLAM with integrated wheel speed.
Figure 2: Schematic diagram of the mobile robot coordinate system.
Figure 3: Camera projection model.
Figure 4: Schematic diagram of wheel speed information pre-integration.
Figure 5: Schematic diagram of square movement in the comparative experiment.
Figure 6: Reference keyframe and current frame in previous tracking: (a) reference keyframe of previous tracking; (b) current frame of previous tracking.
Figure 7: Comparative Experiment One: robot poses and environmental map points obtained by W-VSLAM.
Figure 8: Comparative Experiment Two: robot poses and environmental map points obtained by W-VSLAM.
Figure 9: Comparative Experiment One: trajectory comparison chart of different algorithms.
Figure 10: Comparative Experiment Two: trajectory comparison chart of different algorithms.
Figure 11: Comparative Experiment One: translational component comparison chart of trajectories from different algorithms.
Figure 12: (a) Perception results of robot poses and map points in Experiment One. (b) Comparison between Experiment One and the reference trajectory.
Figure 13: (a) Perception results of robot poses and map points in Experiment Two. (b) Comparison between Experiment Two and the reference trajectory.
Figure 14: (a) Perception results of robot poses and map points in Experiment Three. (b) Comparison between Experiment Three and the reference trajectory.
Figure 15: Indoor long corridor environment trajectory; Rviz result diagram.
Figure 16: Comparison chart of trajectories in the indoor long corridor environment.
Figure 17: Comparison chart of translational components of trajectories in the indoor long corridor environment.
Figure 18: Absolute accuracy of rotational estimation in the indoor long corridor environment.
Figure 19: Relative accuracy of rotational estimation in the indoor long corridor environment (with a 1° increment).
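The reprojection-residual construction described in the abstract above can be sketched by lifting an SE(2) body pose to SE(3), composing it with a fixed body-to-camera extrinsic, and projecting a world landmark through the intrinsics; the extrinsic, intrinsics, and landmark below are invented for illustration and this is not the authors' implementation.

```python
# Minimal sketch (assumed formulation, not the authors' code): a reprojection
# residual for a ground robot whose body pose lives in SE(2) while the camera
# pose is the composition of that planar pose with a fixed SE(3) body-to-camera
# extrinsic, as in the wheel-odometry-aided visual SLAM described above.
import numpy as np

def se2_to_se3(x, y, yaw):
    """Lift a planar body pose (x, y, yaw) to a 4x4 homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, 0.0]
    return T

def reprojection_residual(landmark_w, obs_px, body_pose, T_body_cam, K):
    T_w_body = se2_to_se3(*body_pose)
    T_w_cam = T_w_body @ T_body_cam
    T_cam_w = np.linalg.inv(T_w_cam)
    p_cam = T_cam_w @ np.append(landmark_w, 1.0)        # landmark in the camera frame
    uvw = K @ p_cam[:3]
    return obs_px - uvw[:2] / uvw[2]                    # observed minus predicted pixel

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
    T_body_cam = np.eye(4)
    T_body_cam[:3, :3] = [[0, 0, 1], [-1, 0, 0], [0, -1, 0]]  # camera looks along body +x
    T_body_cam[:3, 3] = [0.2, 0.0, 0.5]                       # 0.2 m forward, 0.5 m up
    landmark = np.array([3.0, 0.5, 1.0])                      # a point in the world frame
    print(reprojection_residual(landmark, np.array([350.0, 230.0]),
                                (0.0, 0.0, 0.0), T_body_cam, K))
```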
16 pages, 5065 KiB  
Article
A Three-Dimensional Reconstruction Method Based on Telecentric Epipolar Constraints
by Qinsong Li, Zhendong Ge, Xin Yang and Xianwei Zhu
Photonics 2024, 11(9), 804; https://doi.org/10.3390/photonics11090804 - 28 Aug 2024
Viewed by 458
Abstract
When calibrating a microscopic fringe projection profile system with a telecentric camera, the orthogonality of the camera causes an ambiguity in the positive and negative signs of its external parameters. A common solution is to introduce additional constraints, which often increase the level of complexity and the calibration cost. Another solution is to abandon the internal/external parameter models derived from the physical imaging process and obtain a numerically optimal projection matrix through the least squares solution. This paper proposes a novel calibration method, which derives a telecentric epipolar constraint model from the conventional epipolar constraint relationship and uses this constraint relationship to complete the stereo calibration of the system. On the one hand, since only the camera’s intrinsic parameters are needed, there is no need to introduce additional constraints. On the other hand, the solution is optimized based on the full consideration of the imaging model to make the parameters confirm to the physical model. Our experiments proved the feasibility and accuracy of the method. Full article
Show Figures
Figure 1: Epipolar constraint in the pinhole model.
Figure 2: Epipolar constraint under the telecentric model.
Figure 3: Experiment platform.
Figure 4: Gray code stripes (left) and phase-shift stripes (right).
Figure 5: Reconstruction of the pseudo-colored point cloud models of the plane and sphere based on telecentric epipolar constraints.
Figure 6: The pseudo-colored point cloud models of the plane and sphere reconstructed using the projection matrix.
Figure 7: The position of the standard sphere in the camera’s field of view.
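The root of the sign ambiguity mentioned above is that a telecentric camera is an affine projector with no depth dependence. The short sketch below (illustrative values, not the paper's calibration) shows that translating the object along the optical axis leaves the projected pixel unchanged.

```python
# Minimal sketch (illustrative): the telecentric (affine) camera model used in
# microscopic fringe projection. Unlike the pinhole model, the projection has
# no depth dependence, which is why extra constraints are needed to resolve
# the sign ambiguity in the extrinsic parameters. All values are made up.
import numpy as np

def project_telecentric(X_world, R, t, m, cx, cy):
    """u = m * (R X + t)_x + cx, v = m * (R X + t)_y + cy  (the z component is discarded)."""
    p = R @ X_world + t
    return np.array([m * p[0] + cx, m * p[1] + cy])

if __name__ == "__main__":
    m = 40.0                      # assumed magnification in px/mm
    cx, cy = 640.0, 512.0
    R = np.eye(3)
    X = np.array([1.0, -0.5, 10.0])          # millimetres
    for tz in (0.0, 5.0, 50.0):              # moving along the optical axis...
        print(project_telecentric(X, R, np.array([0.0, 0.0, tz]), m, cx, cy))
    # ...leaves the pixel unchanged: depth is unobservable for a telecentric camera.
```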
12 pages, 5080 KiB  
Article
The Selection of Lettuce Seedlings for Transplanting in a Plant Factory by a Non-Destructive Estimation of Leaf Area and Fresh Weight
by Jaeho Jeong, Yoomin Ha and Yurina Kwack
Horticulturae 2024, 10(9), 919; https://doi.org/10.3390/horticulturae10090919 - 28 Aug 2024
Viewed by 658
Abstract
Selecting uniform and healthy seedlings is important to ensure that a certain level of production can be reliably achieved in a plant factory. The objectives of this study were to investigate the potential of non-destructive image analysis for predicting the leaf area and shoot fresh weight of lettuce and to determine the feasibility of using a simple image analysis to select robust seedlings that can produce a uniform and dependable yield of lettuce in a plant factory. To vary the range of the leaf area and shoot fresh weight of lettuce seedlings, we applied two- and three-day irrigation intervals during the period of seedling production and calculated the projected canopy size (PCS) from the top-view images of the lettuce seedlings, although there were no significant growth differences between the irrigation regimes. A high correlation was identified between the PCS and shoot fresh weight for the lettuce seedlings during the period of seedling production, with a coefficient of determination exceeding 0.8. Therefore, the lettuce seedlings were classified into four grades (A–D) based on their PCS values calculated at transplanting. In the early stages of cultivation after transplanting, there were differences in the lettuce growth among the four grades; however, at the harvest (28 days after transplanting), there was no significant difference in the lettuce yield between grades A–C, with the exception of grade D. The lettuce seedlings in grades A–C exhibited the anticipated yield (150 g/plant) at the harvest time. In the correlation between the PCS and leaf area or the shoot fresh weight of lettuce during the cultivation period after transplanting and the entire cultivation period, the R2 values were higher than 0.9, confirming that PCS can be used to predict lettuce growth with greater accuracy. In conclusion, we demonstrated that the PCS calculation from the top-view images, a straightforward image analysis technique, can be employed to non-destructively and accurately predict lettuce leaf area and shoot fresh weight, and the seedlings with the potential to yield above a certain level after transplanting can be objectively and accurately selected based on PCS. Full article
(This article belongs to the Special Issue Indoor Farming and Artificial Cultivation)
Show Figures
Figure 1: Spectral distribution of LEDs used in this study.
Figure 2: PIMS (Plant Image Measurement System) used for non-destructive estimation of leaf area and fresh weight of ‘Frillice’ lettuce seedlings in a plant factory.
Figure 3: Images of ‘Frillice’ lettuce seedlings and mature plants taken with PIMS (A,B), separated from their background (C,D), and detected individually using the ENVI program (E,F).
Figure 4: Changes in leaf area (A) and shoot fresh weight (B) of ‘Frillice’ lettuce seedlings during the period of seedling production in a plant factory. The seedlings were sub-irrigated every two (2DI) or three (3DI) days and sampled for 12 DAS.
Figure 5: Distribution of leaf area (A) and shoot fresh weight (B) of ‘Frillice’ lettuce seedlings cultivated under different irrigation regimes at 12 DAS.
Figure 6: Correlation between PCS and leaf area (A) and shoot fresh weight (B) of ‘Frillice’ lettuce seedlings.
Figure 7: ‘Frillice’ lettuce seedlings in four different grades ((A) grade A, (B) grade B, (C) grade C, (D) grade D).
Figure 8: ‘Frillice’ lettuce at harvest (28 DAT in a hydroponic system) according to seedling grades ((A) grade A, (B) grade B, (C) grade C, (D) grade D).
Figure 9: Correlation between PCS and leaf area (A) and shoot fresh weight (B) throughout lettuce cultivation after transplanting.
Figure 10: Correlation between PCS and leaf area (A) and shoot fresh weight (B) throughout the entire lettuce production period.
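A simple way to picture the PCS computation described above is to count plant pixels in the top-view image and feed the count into a fitted linear model; the excess-green segmentation and the regression coefficients below are assumptions for illustration, not the authors' ENVI-based workflow.

```python
# Minimal sketch (assumed pipeline, not the authors' workflow): compute a
# projected canopy size (PCS) as the count of plant pixels in a top-view RGB
# image, here segmented with a simple excess-green index, then map PCS to an
# estimated fresh weight with a pre-fitted linear regression (made-up slope).
import numpy as np

def projected_canopy_size(rgb: np.ndarray, threshold: float = 20.0) -> int:
    """Count pixels whose excess-green index 2G - R - B exceeds the threshold."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2.0 * g - r - b
    return int(np.count_nonzero(exg > threshold))

def estimate_fresh_weight(pcs_px: int, slope: float = 0.0012, intercept: float = 0.5) -> float:
    """Hypothetical calibration: grams per plant as a linear function of PCS in pixels."""
    return slope * pcs_px + intercept

if __name__ == "__main__":
    # Toy top-view image: brownish background with a green 40x40 "seedling".
    img = np.zeros((200, 200, 3), dtype=np.uint8)
    img[:, :] = (120, 90, 60)              # substrate background
    img[80:120, 80:120] = (60, 160, 50)    # plant canopy
    pcs = projected_canopy_size(img)
    print(f"PCS = {pcs} px, estimated shoot fresh weight = {estimate_fresh_weight(pcs):.2f} g")
```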
25 pages, 19272 KiB  
Article
6DoF Object Pose and Focal Length Estimation from Single RGB Images in Uncontrolled Environments
by Mayura Manawadu and Soon-Yong Park
Sensors 2024, 24(17), 5474; https://doi.org/10.3390/s24175474 - 23 Aug 2024
Viewed by 883
Abstract
Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling, thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation using single RGB images of unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image, obtained from the internet, is challenging because it often lacks crucial metadata. Existing methods such as FocalPose and Focalpose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera’s focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily, and we predict all the other pose parameters and focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. The iterative update rules constrained to the first stage and tailored loss functions including Huber loss in the second stage enhance the accuracy in both 6DoF pose and focal length estimation. Experimental results using benchmark datasets show significant improvements in terms of median rotation and translation errors, as well as better projection accuracy compared to the existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, the incorporation of Huber loss resulted in a significant reduction in translation and focal length errors by 20.27% and 6.65%, respectively, in comparison to the Focalpose++ method. Full article
(This article belongs to the Special Issue Computer Vision and Virtual Reality: Technologies and Applications)
Show Figures
Figure 1: Projection of an object onto the image plane of a pinhole camera using perspective projection.
Figure 2: Initial position and orientation of the real-world chair and the image plane based on ground truth values.
Figure 3: Change of the projection scale of the image after setting tz to an arbitrary value.
Figure 4: Obtaining the same projection size of the chair by re-scaling the focal length relative to the adjustment of tz.
Figure 5: Two-stage approach for predicting the 6DoF pose and focal length from a single uncontrolled RGB image.
Figure 6: Comparison of the outputs from the proposed method with Focalpose [13] and Focalpose++ [14] using the Pix3D dataset. Subfigures (a–t) represent different classes of chair, sofa, bed, and table from the Pix3D dataset. Metadata of these images are not available at inference time.
Figure 7: (a) Input single RGB image, (b) prediction from Focalpose [13], (c) prediction from the proposed work (Stage II output), (d) outputs obtained by employing multiple refiner iterations in Stage II of the proposed approach. The green contours represent the predicted pose during each refiner iteration in Stage II, and the red contour represents the ground truth.
Figure 8: Distribution of tz across different classes in the Pix3D dataset.
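The focal-length/z-translation coupling that motivates the two-stage design reduces to keeping f/tz constant: once the true tz is estimated in the second stage, the focal length predicted against the fixed tz is rescaled so the projected size is preserved. The numbers in the sketch below are purely illustrative.

```python
# Minimal sketch (illustrative arithmetic, not the paper's network): the
# projection-scale ambiguity couples focal length f and z-translation tz,
# because the image size of an object scales roughly with f / tz. The
# two-stage idea fixes tz arbitrarily first, then rescales f when the true
# tz is estimated so that the projected silhouette keeps the same size.
def rescale_focal_length(f_stage1: float, tz_fixed: float, tz_estimated: float) -> float:
    """Keep f / tz constant: f_new = f_stage1 * tz_estimated / tz_fixed."""
    return f_stage1 * tz_estimated / tz_fixed

def projected_size(object_size_m: float, f_px: float, tz_m: float) -> float:
    """Apparent size in pixels of a fronto-parallel object under a pinhole model."""
    return f_px * object_size_m / tz_m

if __name__ == "__main__":
    f1, tz_fixed = 900.0, 2.0       # stage-one estimate with tz held at 2 m (assumed)
    tz_true = 3.5                   # stage-two estimate of the real z-translation
    f2 = rescale_focal_length(f1, tz_fixed, tz_true)
    chair_width = 0.6               # metres
    print(projected_size(chair_width, f1, tz_fixed))   # 270.0 px
    print(projected_size(chair_width, f2, tz_true))    # 270.0 px -- unchanged
```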