Search Results (8,076)

Search Parameters:
Keywords = image sensor

24 pages, 5758 KiB  
Article
Phenological Monitoring of Irrigated Sugarcane Using Google Earth Engine, Time Series, and TIMESAT in the Brazilian Semi-arid
by Diego Rosyur Castro Manrique, Pabrício Marcos Oliveira Lopes, Cristina Rodrigues Nascimento, Eberson Pessoa Ribeiro and Anderson Santos da Silva
AgriEngineering 2024, 6(4), 3799-3822; https://doi.org/10.3390/agriengineering6040217 - 18 Oct 2024
Abstract
Monitoring sugarcane phenology is essential since the globalized market requires reliable information on the quantity of raw materials for the industrial production of sugar and alcohol. In this context, the general objective of this study was to evaluate the phenological seasonality of the sugarcane varieties SP 79-1011 and VAP 90-212 observed from NDVI time series over 19 years (2001–2020) from global databases. In addition, this research had the following specific objectives: (i) to estimate phenological parameters (Start of Season (SOS), End of Season (EOS), Length of Season (LOS), and Peak of Season (POS)) by applying the TIMESAT software (version 3.3) to the 19-year NDVI time series; (ii) to characterize the land use and land cover obtained from the MapBiomas project; (iii) to analyze rainfall variability; and (iv) to validate the sugarcane harvest date (SP 79-1011). This study was carried out in sugarcane growing areas in Juazeiro, Bahia, Brazil. The results showed that the NDVI time series did not follow the rainfall in the region. The sugarcane areas advanced over the savanna formation (Caatinga), reducing it to remnants along the irrigation channels. The comparison of the observed harvest dates of the SP 79-1011 variety to the values estimated with the TIMESAT software showed an excellent fit of 0.99. The mean absolute error in estimating the sugarcane harvest date was approximately ten days, with a performance index of 0.99 and a correlation coefficient of 0.99, significant at the 5% level. The TIMESAT software was able to estimate the phenological parameters of sugarcane using MODIS sensor images processed on the Google Earth Engine platform during the evaluated period (2001 to 2020).
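The season metrics named in the abstract (SOS, EOS, LOS) are threshold crossings of a smoothed vegetation index curve. As a rough illustration of the idea, not the authors' TIMESAT workflow, here is a minimal Python sketch that smooths a synthetic NDVI series with a Savitzky–Golay filter and reads off season bounds at an assumed 20% amplitude threshold; the window length and polynomial order are likewise assumptions.

```python
# Rough illustration of threshold-based season metrics on a smoothed NDVI
# curve (not the authors' TIMESAT workflow). Window length, polynomial order,
# and the 20% amplitude threshold are assumptions.
import numpy as np
from scipy.signal import savgol_filter

def season_bounds(ndvi, threshold=0.2):
    """Return (SOS, EOS) sample indices where the smoothed curve crosses
    base + threshold * seasonal amplitude."""
    smooth = savgol_filter(ndvi, window_length=11, polyorder=3)
    level = smooth.min() + threshold * (smooth.max() - smooth.min())
    above = np.where(smooth >= level)[0]
    return above[0], above[-1]

# Synthetic single-season series, 23 composites per year (16-day MODIS cadence):
t = np.arange(23)
ndvi = 0.25 + 0.5 * np.exp(-0.5 * ((t - 11) / 4) ** 2) + 0.02 * np.random.randn(23)
sos, eos = season_bounds(ndvi)
print(f"SOS={sos}, EOS={eos}, LOS={eos - sos}")
```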
Figure 1: Map of the sugarcane area with the physical boundaries in an RGB (red, green, and blue) Landsat-8 color composite, and the location under study.
Figure 2: Graphical abstract of the steps for obtaining the phenological metrics, where SOS = Start of Season, EOS = End of Season, LOS = Length of Season, and POS = Peak of Season. Source: adapted from Rodigheri et al. [40].
Figure 3: TIMESAT software modules for processing NDVI time series.
Figure 4: Application of the Savitzky–Golay filter (red curve) to a scaled NDVI time series (black curve) as a function of time (days) to estimate phenological parameters: points (a) and (b) mark, respectively, the start and end of the season; points (c) and (d) give the 80% levels; (e) is the point with the maximum value; (f) the seasonal amplitude; (g) the seasonal length; and (h) and (i) are integrals showing the cumulative effect of vegetation during the season. Source: Jönsson and Eklundh [48].
Figure 5: Meteorological data from the Meteorology Laboratory (LabMet) automatic weather station for the period 2008 to 2012, Juazeiro, Bahia, Brazil.
Figure 6: Classification of the land use and land cover of the watershed using MapBiomas Collection 6 (2006–2012).
Figure 7: Temporal distribution of the area cultivated with sugarcane in the watershed from 2001 to 2020 in Juazeiro, Bahia, Brazil.
Figure 8: Time series of MODIS NDVI (2001–2020) for the total area and rainfall from LabMet Juazeiro (2008–2020).
Figure 9: NDVI time series and Savitzky–Golay filter for the total sugarcane area. The dots represent the start (blue) and end (yellow) of the sugarcane phenological cycles.
Figure 10: Sugarcane agricultural calendar in the test area for varieties SP 79-1011 and VAP 90-212. Months in blue refer to the phenological phases of sugarcane.
Figure 11: Comparison of observed harvest dates of variety SP 79-1011 with values estimated by the TIMESAT software for the test area between 2006 and 2012.
25 pages, 13404 KiB  
Article
Drone SAR Imaging for Monitoring an Active Landslide Adjacent to the M25 at Flint Hall Farm
by Anthony Carpenter, James A. Lawrence, Philippa J. Mason, Richard Ghail and Stewart Agar
Remote Sens. 2024, 16(20), 3874; https://doi.org/10.3390/rs16203874 - 18 Oct 2024
Abstract
Flint Hall Farm in Godstone, Surrey, UK, is situated adjacent to the London Orbital Motorway, or M25, and contains several landslide systems which pose a significant geohazard risk to this critical infrastructure. The site has been routinely monitored by geotechnical engineers following a landslide that encroached onto the hard shoulder in December 2000; current in situ instrumentation includes inclinometers and piezoelectric sensors. Interferometric Synthetic Aperture Radar (InSAR) is an active remote sensing technique that can quantify millimetric rates of Earth surface and structural deformation, typically utilising satellite data, and is ideal for monitoring landslide movements. We have developed the hardware and software for an Unmanned Aerial Vehicle (UAV), or drone radar system, for improved operational flexibility and spatial–temporal resolutions in the InSAR data. The hardware payload includes an industrial-grade DJI drone, a high-performance Ettus Software Defined Radar (SDR), and custom Copper Clad Laminate (CCL) radar horn antennas. The software utilises Frequency Modulated Continuous Wave (FMCW) radar at 5.4 GHz for raw data collection and a Range Migration Algorithm (RMA) for focusing the data into a Single Look Complex (SLC) Synthetic Aperture Radar (SAR) image. We present the first SAR image acquired using the drone radar system at Flint Hall Farm, which provides an improved spatial resolution compared to satellite SAR. Discrete targets on the landslide slope, such as corner reflectors and the in situ instrumentation, are visible as bright pixels, with their size and positioning as expected; the surrounding grass and vegetation appear as natural speckles. Drone SAR imaging is an emerging field of research, given the necessary and recent technological advancements in drones and SDR processing power; as such, this is a novel achievement, with few authors demonstrating similar systems. Ongoing and future work includes repeat-pass SAR data collection and developing the InSAR processing chain for drone SAR data to provide meaningful deformation outputs for the landslides and other geotechnical hazards and infrastructure.
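The abstract's FMCW step rests on a standard relation: after dechirping, a target at range R produces a beat tone at f_b = 2RB/(cT), so an FFT of the beat signal is a range profile. The sketch below illustrates that relation only, not the paper's RMA processing chain; the 200 MHz bandwidth, 1 ms sweep, and sample rate are made-up values.

```python
# Illustrative sketch of the FMCW range relation, not the paper's RMA chain.
# Bandwidth, sweep time, and sample rate are assumed values.
import numpy as np

c = 3e8      # speed of light (m/s)
B = 200e6    # sweep bandwidth (Hz), assumption
T = 1e-3     # sweep duration (s), assumption
fs = 2e6     # baseband sample rate (Hz), assumption

# A point target at range R produces a beat tone at f_b = 2*R*B / (c*T).
R_true = 150.0
f_beat = 2 * R_true * B / (c * T)
t = np.arange(0, T, 1 / fs)
beat = np.cos(2 * np.pi * f_beat * t)

# FFT of the windowed beat signal gives a range profile: R = c*f*T / (2*B).
spectrum = np.abs(np.fft.rfft(beat * np.hanning(t.size)))
ranges = c * np.fft.rfftfreq(t.size, 1 / fs) * T / (2 * B)
print(f"Estimated target range: {ranges[np.argmax(spectrum)]:.1f} m")  # ~150 m
```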
Figure 1: Flint Hall Farm study area (hatched red pattern), with annotated M25, Godstone, and a regional UK overview map.
Figure 2: Flint Hall Farm study area (hatched red pattern), with the 1 m LiDAR Composite DTM for elevation and labelled contour lines.
Figure 3: Zones and sub-zones at the Flint Hall Farm site, including the Flint Hall Farm Zone, Zones 1–3 (red); the Midslope Zone, Zones 1–2 (blue); and the Rooks Nest Farm Zone, Zones 1–4 (yellow) [25]. The zone colour shading is more transparent than the legend colours for surface feature visibility.
Figure 4: Landslide extents at the Flint Hall Farm site, including the Flint Hall Farm, Flint Hall Farm South, and Rooks Nest Farm landslides [25].
Figure 5: Simplified geological map of the study area, with Flint Hall Farm (red circle) and a geological cross-section for line A–A′ (green circle). Adapted from [26].
Figure 6: Schematic of the Flint Hall Farm landslide, which occurred on 19 December 2000 [1].
Figure 7: Geological cross-section schematic of the Flint Hall Farm landslide, which occurred on 19 December 2000 [1].
Figure 8: Corner reflectors at Flint Hall Farm, with annotated M25.
Figure 9: Photographs of the CCL horn antennas: (a) external view; (b) internal view.
Figure 10: Drone radar payload, with the CCL horn antennas, E312 SDR, 3D-printed connection stabiliser for the SMB–SMA connectors, and Raspberry Pi (on the back).
Figure 11: Drone radar payload attached to the drone at Flint Hall Farm.
Figure 12: FMCW modulation: (a) amplitude domain; (b) frequency domain, where transmission (Tx) is red and reception (Rx) is green.
Figure 13: FMCW radar block diagram.
Figure 14: RMA block diagram.
Figure 15: Schematic of the drone flight geometry with a corner reflector target.
Figure 16: Photograph of the drone radar system in flight at Flint Hall Farm.
Figure 17: (a) SLC SAR image from Flint Hall Farm, with annotated flight path and circled targets, including the corner reflectors (red) and other fenced areas for in situ instrumentation (yellow, blue, and pink); (b) Google Street View imagery of Flint Hall Farm, with annotated flight path and corresponding circled targets, as indicated by the arrows connecting (a) and (b).
Figure 18: Average SAR amplitude for Flint Hall Farm (white boundary) from September 2021 to September 2023, with annotated M25 and Godstone. The zoomed image boundary is denoted by the red box. The corner reflector and in situ instrumentation pixels are circled in blue.
Figure 19: Side-by-side comparison of (a) the average SAR amplitude for Flint Hall Farm (white boundary) from September 2021 to September 2023, with the corner reflector and in situ instrumentation pixels circled in blue, and (b) the drone SAR image from Flint Hall Farm, with circled targets, including the corner reflectors (red) and other fenced areas for in situ instrumentation (yellow, blue, and pink).
16 pages, 9232 KiB  
Article
DSM Reconstruction from Uncalibrated Multi-View Satellite Stereo Images by RPC Estimation and Integration
by Dong-Uk Seo and Soon-Yong Park
Remote Sens. 2024, 16(20), 3863; https://doi.org/10.3390/rs16203863 - 17 Oct 2024
Abstract
In this paper, we propose a 3D Digital Surface Model (DSM) reconstruction method for uncalibrated Multi-view Satellite Stereo (MVSS) images, where Rational Polynomial Coefficient (RPC) sensor parameters are not available. While recent investigations have introduced several techniques to reconstruct high-precision and high-density DSMs from MVSS images, they inherently depend on the use of geo-corrected RPC sensor parameters. However, RPC parameters from satellite sensors can be erroneous due to inaccurate sensor data. In addition, with the increasing data availability from the internet, uncalibrated satellite images without RPC parameters can be easily obtained. This study proposes a novel method to reconstruct a 3D DSM from uncalibrated MVSS images by estimating and integrating RPC parameters. To do this, we first employ a structure from motion (SfM) and 3D homography-based geo-referencing method to reconstruct an initial DSM. Second, we sample 3D points from the initial DSM as references and reproject them to the 2D image space to determine 3D–2D correspondences. Using the correspondences, we directly calculate all RPC parameters. To overcome memory shortages when processing large satellite images, we also propose an RPC integration method: the image space is partitioned into multiple tiles, and RPC estimation is performed independently in each tile. Then, all tiles' RPCs are integrated into a final RPC that represents the geometry of the whole image space. Finally, the integrated RPC is used to run a true MVSS pipeline to obtain the 3D DSM. The experimental results show that the proposed method achieves a 1.455 m Mean Absolute Error (MAE) in height map reconstruction from multi-view satellite benchmark datasets. We also show that the proposed method can reconstruct a geo-referenced 3D DSM from uncalibrated and freely available Google Earth imagery.
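An RPC model maps 3D ground coordinates to image coordinates through ratios of polynomials, and fitting it from 3D–2D correspondences can be posed as a linear least-squares problem. The sketch below shows that formulation for a deliberately simplified first-order rational model (a real RPC uses third-order polynomials with 78 coefficients per coordinate); it is illustrative, not the paper's estimator.

```python
# Simplified sketch: fitting a first-order rational model row = P1/P2 to
# 3D-to-2D correspondences by linear least squares (a real RPC is third-order
# with 78 coefficients). Fixing the constant term of P2 to 1 and rearranging
# row * P2 = P1 makes the problem linear in the unknown coefficients.
import numpy as np

def fit_rational(points3d, rows):
    """points3d: (N, 3) normalized ground coords; rows: (N,) image row coords."""
    X, Y, Z = points3d.T
    # Unknowns: [a0 a1 a2 a3, b1 b2 b3] with P1 = a0+a1X+a2Y+a3Z, P2 = 1+b1X+b2Y+b3Z.
    A = np.column_stack([np.ones_like(X), X, Y, Z, -rows * X, -rows * Y, -rows * Z])
    coeffs, *_ = np.linalg.lstsq(A, rows, rcond=None)
    return coeffs

def apply_rational(coeffs, points3d):
    X, Y, Z = points3d.T
    num = coeffs[0] + coeffs[1] * X + coeffs[2] * Y + coeffs[3] * Z
    den = 1 + coeffs[4] * X + coeffs[5] * Y + coeffs[6] * Z
    return num / den

# Round trip on synthetic correspondences:
true = np.array([1.0, 2.0, -1.0, 0.5, 0.1, 0.0, -0.05])
pts = np.random.rand(100, 3)
print(np.allclose(fit_rational(pts, apply_rational(true, pts)), true))  # True
```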
Figure 1: Pipeline of the proposed method. (References: GEMVS [14], 3D-to-2D correspondence search [17], MS2P [25].)
Figure 2: 3D-to-2D projection process for finding correspondences.
Figure 3: 2D-to-3D correspondence search process. For example, an image space is divided into 3 × 3 tiles. Red points are uniform samples in image I_{C_i}. A sampled point is reprojected to the geo-referencing space by each tile's inverse RPCs to obtain P_{T_j}. Then, all P_{T_j} are weighted-averaged by the distance of the point to each tile center to obtain the final correspondence P_{S_k}.
Figure 4: A simplified flow diagram of MS2P. The baseline MVS algorithm is EnSoft3D [29], modified to use the estimated RPC parameters.
Figure 5: Results of the proposed method on GE imagery. The first row shows Sigiriya, Sri Lanka (7°57′22″N 80°45′32″E); the second row shows Sydney, Australia (33°51′25″S 151°12′42″E).
Figure 6: Comparison of height map results from uncalibrated satellite images using the pin-hole camera model with GEMVS [17] and the RPC model with MS2P [25].
Figure 7: Comparison of the DSM reconstructions of two camera models: COLMAP and GEMVS with the pin-hole model, and MS2P with the estimated RPC model.
Figure 8: MAE and RMSE of the height map compared with the GT DSM from the DFC19 dataset.
Figure 9: Error analysis of the OMA_284 tile. From left: reconstructed DSM, GT model, and error map.
14 pages, 1631 KiB  
Review
Targeting Sodium in Heart Failure
by Filippos Triposkiadis, Andrew Xanthopoulos and John Skoularigis
J. Pers. Med. 2024, 14(10), 1064; https://doi.org/10.3390/jpm14101064 - 17 Oct 2024
Abstract
A dominant event determining the course of heart failure (HF) is the disruption of the delicate sodium (Na+) and water balance, leading to Na+ and water retention and edema formation. Although incomplete decongestion adversely affects outcomes, it is unknown whether interventions directly targeting Na+, such as strict dietary Na+ restriction, intravenous hypertonic saline, and diuretics, reverse this effect. As a result, it is imperative to implement Na+-targeting interventions in selected HF patients with established congestion on top of quadruple therapy with an angiotensin receptor–neprilysin inhibitor, β-adrenergic receptor blocker, mineralocorticoid receptor antagonist, and sodium–glucose cotransporter 2 inhibitor, which dramatically improves outcomes. The limited effectiveness of Na+-targeting treatments may be partly due to the fact that the current metrics of HF severity have a limited capacity to foresee and avert episodes of congestion and to guide Na+-targeting treatments, which often leads to dysnatremias, adversely affecting outcomes. Recent evidence suggests that spot urinary sodium measurements may be used as a guide to monitor Na+-targeting interventions in both chronic and acute HF. Further, the classical 2-compartment model of Na+ storage has been displaced by the 3-compartment model, which emphasizes the non-osmotic accumulation of Na+, chiefly in the skin. 23Na magnetic resonance imaging (MRI) enables the accurate and reliable quantification of tissue Na+. Another promising approach enabling tissue Na+ monitoring is based on wearable devices employing ion-selective electrodes for electrolyte detection, including Na+ and Cl−. Undoubtedly, further studies using 23Na-MRI technology and wearable sensors are required to learn more about the clinical significance of tissue Na+ storage and Na+-related mechanisms of morbidity and mortality in HF.
(This article belongs to the Section Disease Biomarker)
Figure 1: The 3-compartment model. Sodium is stored in tissues (e.g., skin or muscles) in addition to the intravascular and interstitial compartments. The third-compartment sodium is osmotically inactive and can be either returned to the intravascular compartment through lymphatic vessels or excreted through sweat. (Adapted from Ref. [30]: Polychronopoulou E, Braconnier P, and Burnier M (2019) New Insights on the Role of Sodium in the Physiological Regulation of Blood Pressure and Development of Hypertension. Front. Cardiovasc. Med. 6:136.)
Figure 2: Mechanisms of damage to the glycocalyx induced by salt, resulting in hypertension and cardiovascular disease. High salt intake impairs the glycocalyx and induces inflammation, oxidative stress, and immune activation, leading to the development of hypertension and cardiovascular disease. ROS, reactive oxygen species; ENaC, epithelial sodium channel; NADPH, reduced nicotinamide adenine dinucleotide phosphate; NLRP3, NLR family pyrin domain containing 3; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; IsoLGs, isolevuglandins; TNF-α, tumor necrosis factor alpha; IFN-γ, interferon gamma. (Adapted from Ref. [42]: Sembajwe, L.F.; Ssekandi, A.M.; Namaganda, A.; Muwonge, H.; Kasolo, J.N.; Kalyesubula, R.; Nakimuli, A.; Naome, M.; Patel, K.P.; Masenga, S.K.; et al. Glycocalyx–Sodium Interaction in Vascular Endothelium. Nutrients 2023, 15, 2873.)
Figure 3: Mechanisms of diuretic resistance and hypertonic saline (HS). The lower panel depicts the apical membrane of the tubular cells of the thick ascending limb of the Henle loop. (A) Under physiological conditions, the Na+/K+/Cl− cotransporter 2 (NKCC2), which is blocked by loop diuretics, contributes to the reabsorption of up to 25% of filtered sodium. (B) In cases of diuretic resistance, sodium reabsorption increases in the different segments of the nephron, resulting in a lower concentration of sodium in the tubular lumen of the Henle loop and, therefore, less sodium excretion in urine and less diuresis. (C) With the use of HS, sodium concentration increases in the tubular lumen, potentiating the action of loop diuretics and attenuating diuretic resistance. (Adapted from Ref. [58]: Ciro Mancilha Murad and Fabiana Goulart Marcondes-Braga. Hypertonic Saline Solution: How, Why, and for Whom? ABC Heart Fail Cardiomyop. 2023; 3(2):e20230078.)
Figure 4: Disease-modifying treatment with renin–angiotensin–aldosterone inhibitors (angiotensin-converting enzyme inhibitors/angiotensin receptor blockers/angiotensin receptor–neprilysin inhibitors/mineralocorticoid receptor antagonists), β-adrenergic blockers, and especially sodium–glucose transporter 2 inhibitors, which additionally inhibit proximal tubule sodium (Na+) reabsorption, is the cornerstone of the prevention and treatment of Na+ retention leading to congestion in HF. Interventions directly targeting Na+, such as strict dietary sodium restriction, intravenous hypertonic saline (IV saline), and diuretics, should additionally be implemented in selected patients with florid congestion to alleviate symptoms, as they are not devoid of adverse effects and their effect on outcomes is doubtful.
18 pages, 9570 KiB  
Article
A Depth Awareness and Learnable Feature Fusion Network for Enhanced Geometric Perception in Semantic Correspondence
by Fazeng Li, Chunlong Zou, Juntong Yun, Li Huang, Ying Liu, Bo Tao and Yuanmin Xie
Sensors 2024, 24(20), 6680; https://doi.org/10.3390/s24206680 - 17 Oct 2024
Abstract
Deep learning is becoming the most widely used technology for multi-sensor data fusion. Semantic correspondence has recently emerged as a foundational task, enabling a range of downstream applications, such as style or appearance transfer, robot manipulation, and pose estimation, through its ability to provide robust correspondence in RGB images with semantic information. However, current representations generated by self-supervised learning and generative models are often limited in their ability to capture and understand the geometric structure of objects, which is significant for matching the correct details in applications of semantic correspondence. Furthermore, efficiently fusing these two types of features presents an interesting challenge. Achieving a harmonious integration of these features is crucial for improving the expressive power of models in various tasks. To tackle these issues, our key idea is to integrate depth information from depth estimation or depth sensors into feature maps and leverage learnable weights for feature fusion. First, depth information is used to model pixel-wise depth distributions, assigning relative depth weights to feature maps so the network perceives an object's structural information. Then, based on a contrastive learning optimization objective, a series of weights are optimized to leverage feature maps from self-supervised learning and generative models. Depth features are naturally embedded into feature maps, guiding the network to learn geometric structure information about objects and alleviating depth ambiguity issues. Experiments on the SPair-71K and AP-10K datasets show that the proposed method achieves scores of 81.8 and 83.3 on the percentage of correct keypoints (PCK) at the 0.1 level, respectively. Our approach not only demonstrates significant advantages in experimental results but also introduces a depth awareness module and a learnable feature fusion module, which enhance the understanding of object structures through depth information and fully utilize features from various pre-trained models, offering new possibilities for the application of deep learning in RGB and depth data fusion technologies. We will also continue to focus on accelerating model inference and making the model more lightweight so that it can operate at a faster speed.
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
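As a loose illustration of the two ingredients the abstract describes, depth-derived weighting of feature maps plus learnable fusion weights, here is a minimal PyTorch sketch. The module, its normalization, and the (1 + depth) modulation are assumptions for demonstration, not the paper's architecture.

```python
# Minimal sketch (assumptions throughout, not the paper's architecture): fusing
# feature maps from multiple pre-trained backbones with learnable weights,
# after modulating each map by a normalized relative depth weighting.
import torch
import torch.nn as nn

class DepthAwareFusion(nn.Module):
    def __init__(self, num_sources=2):
        super().__init__()
        # One learnable scalar weight per feature source, softmax-normalized.
        self.logits = nn.Parameter(torch.zeros(num_sources))

    def forward(self, feats, depth):
        # feats: list of (B, C, H, W) maps; depth: (B, 1, H, W), arbitrary scale.
        d = (depth - depth.amin()) / (depth.amax() - depth.amin() + 1e-6)
        w = torch.softmax(self.logits, dim=0)
        # Weight each source map by the normalized depth, then blend.
        return sum(wi * f * (1.0 + d) for wi, f in zip(w, feats))

fused = DepthAwareFusion()([torch.randn(1, 64, 32, 32),
                            torch.randn(1, 64, 32, 32)],
                           torch.rand(1, 1, 32, 32))
```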
Figure 1: The previous work [39] (a) found it challenging to differentiate between the front and rear wheels of motorcycles; our method (b) helps alleviate this issue. Green lines represent correct matches, and red lines incorrect ones.
Figure 2: An overview of our method pipeline.
Figure 3: Pipeline of the latent depth awareness module.
Figure 4: Comparison of PCA of the feature map before and after processing through this module. From left to right: original image, PCA of the original feature map, deep feature information, and final result.
Figure 5: Framework of the feature fusion module.
Figure 6: Qualitative comparison of the dog, horse, and sheep categories: (a) result of CATs++ [58]; (b) result of DHF [38]; (c) result of SD+DINO [39]; (d) our result. Green lines represent correct matches, and red lines incorrect ones.
Figure 7: Qualitative comparison of the bus, car, and train categories: (a) result of CATs++ [58]; (b) result of DHF [38]; (c) result of SD+DINO [39]; (d) our result. Green lines represent correct matches, and red lines incorrect ones.
Figure 8: Qualitative comparison of the person and TV categories: (a) result of CATs++ [58]; (b) result of DHF [38]; (c) result of SD+DINO [39]; (d) our result. Green lines represent correct matches, and red lines incorrect ones.
Figure 9: The limitation of scale differences.
32 pages, 25887 KiB  
Review
Deep-Learning for Change Detection Using Multi-Modal Fusion of Remote Sensing Images: A Review
by Souad Saidi, Soufiane Idbraim, Younes Karmoude, Antoine Masse and Manuel Arbelo
Remote Sens. 2024, 16(20), 3852; https://doi.org/10.3390/rs16203852 - 17 Oct 2024
Abstract
Remote sensing images provide a valuable way to observe the Earth’s surface and identify objects from a satellite or airborne perspective. Researchers can gain a more comprehensive understanding of the Earth’s surface by using a variety of heterogeneous data sources, including multispectral, hyperspectral, radar, and multitemporal imagery. This abundance of different information over a specified area offers an opportunity to significantly improve change detection tasks by merging or fusing these sources. This review explores the application of deep learning for change detection in remote sensing imagery, encompassing both homogeneous and heterogeneous scenes. It delves into publicly available datasets specifically designed for this task, analyzes selected deep learning models employed for change detection, and explores current challenges and trends in the field, concluding with a look towards potential future developments.
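For readers unfamiliar with the fusion taxonomy this review surveys, the sketch below contrasts early fusion (stacking both dates at the input) with late, Siamese-style fusion (encoding each date with shared weights and comparing features). It is a generic illustration, not a model from the review; all layer sizes are arbitrary.

```python
# Generic illustration (not from the review): early vs. late fusion of
# bi-temporal images for change detection. Early fusion concatenates the two
# dates at the input; late fusion encodes each date separately with shared
# weights (Siamese) and compares the embeddings.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

early_head = nn.Conv2d(6, 1, 3, padding=1)   # takes both dates stacked
late_head = nn.Conv2d(16, 1, 3, padding=1)   # takes the feature difference

t1 = torch.randn(1, 3, 64, 64)   # image at date 1
t2 = torch.randn(1, 3, 64, 64)   # image at date 2

change_early = torch.sigmoid(early_head(torch.cat([t1, t2], dim=1)))
change_late = torch.sigmoid(late_head(torch.abs(encoder(t1) - encoder(t2))))
```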
Figure 1: PRISMA flow diagram.
Figure 2: Year-wise publications from 2017 to 2024.
Figure 3: Global distribution of publications.
Figure 4: Feature extraction strategies: (a) early fusion; (b) late fusion; (c) multiple fusion.
Figure 5: Model structures: (a) single-stream network; (b) general Siamese network structure; (c) double-stream UNet.
Figure 6: Structures of super-resolution change detection methods.
18 pages, 15800 KiB  
Article
Research on Precise Attitude Measurement Technology for Satellite Extension Booms Based on the Star Tracker
by Peng Sang, Wenbo Liu, Yang Cao, Hongbo Xue and Baoquan Li
Sensors 2024, 24(20), 6671; https://doi.org/10.3390/s24206671 - 16 Oct 2024
Abstract
This paper reports the successful application of a self-developed, miniaturized, low-power nano-star tracker for precise attitude measurement of a 5-m-long satellite extension boom. Such extension booms are widely used in space science missions to extend and support payloads like magnetometers. The nano-star tracker, based on a CMOS image sensor, weighs 150 g (including the baffle), has a total power consumption of approximately 0.85 W, and achieves a pointing accuracy of about 5 arcseconds. It is paired with a low-cost, commercial lens and utilizes automated calibration techniques for measurement correction of the collected data. This system has been successfully applied to the precise attitude measurement of the 5-m magnetometer boom on the Chinese Advanced Space Technology Demonstration Satellite (SATech-01). Analysis of the in-orbit measurement data shows that within shadowed regions, the extension boom remains stable relative to the satellite, with a standard deviation of 30′′ (1σ). The average Euler angles for the “X-Y-Z” rotation sequence from the extension boom to the satellite are [−89.49°, 0.08°, 90.11°]. In the transition zone from shadow to sunlight, influenced by vibrations and thermal factors during satellite attitude adjustments, the maximum angular fluctuation of the extension boom relative to the satellite is approximately ±2°. These data and the accuracy of the measurements can effectively correct magnetic field vector measurements.
(This article belongs to the Section Remote Sensors)
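The boom-to-satellite Euler angles quoted in the abstract can be derived from the two trackers' attitude quaternions. A minimal sketch with SciPy is below; the quaternion values are placeholders, and the frame conventions are assumptions rather than the mission's actual definitions.

```python
# Illustrative sketch with placeholder quaternions, not the flight software:
# the boom-to-satellite rotation from two tracker attitudes, expressed as an
# intrinsic "X-Y-Z" Euler sequence as in the abstract.
from scipy.spatial.transform import Rotation as R

q_boom = [0.5, -0.5, 0.5, 0.5]       # boom tracker attitude (x, y, z, w), placeholder
q_body = [0.0, 0.0, 0.7071, 0.7071]  # body tracker attitude (x, y, z, w), placeholder

r_rel = R.from_quat(q_body) * R.from_quat(q_boom).inv()  # boom frame -> body frame
print(r_rel.as_euler("XYZ", degrees=True))  # [X, Y, Z] angles in degrees
```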
Figure 1: The self-developed nano-star tracker.
Figure 2: Electronic functional block diagram of the star tracker.
Figure 3: Photograph of a star tracker circuit board.
Figure 4: Schematic diagram of the multi-tasking pipeline of the star tracker.
Figure 5: Application-layer software threads of the star tracker.
Figure 6: Composition of the automatic calibration system for the star tracker.
Figure 7: The star tracker calibration device: (a) overall view; (b) working state.
Figure 8: The residual calibration process: (a) image of the marked star points; (b) calibration residual diagram.
Figure 9: Static multi-satellite simulator test: (a) the static multi-satellite simulation test device; (b) the test results.
Figure 10: Ground test: (a) the joint field stargazing experiment device of the probe assembly; (b) measurement results of the star tracker, where Q1, Q2, Q3, and Q4 represent the tracker's output attitude quaternions.
Figure 11: Measured star point image data from a 100 ms exposure of the star tracker ("Cassiopeia"): (a) image collected by the star sensor; (b) the star point data analyzed in the software, corresponding one-to-one with the identified stars; (c) starry sky image of the Cassiopeia position in the Stellarium software (v1.28).
Figure 12: Remanence test experiment.
Figure 13: Assembly diagram of the star tracker on the Chinese Advanced Space Technology Demonstration Satellite: (a) the probe assembly on the extension boom, with the red light shield covering the precise attitude measurement component of the nano-star tracker developed in this study; (b) assembly diagram of the star tracker and extension boom structure on the whole satellite; (c) the coordinate system relationships of the satellite's extension boom, where the satellite platform's boom base is defined as the XY plane and the boom's extension direction as the Z-axis.
Figure 14: Euler angles for the conversion from the star tracker's NST system to the satellite system. The time range of the data is UTC 13 February 2023 09:09:10 to 12:05:50.
Figure 15: Quaternions collected by the star tracker: (a) data from the star tracker mounted on the satellite body; (b) data from the star tracker on the extension boom. The time range of the data is UTC 13 February 2023 09:09:10 to 12:05:50.
16 pages, 10398 KiB  
Article
U-Net Semantic Segmentation-Based Calorific Value Estimation of Straw Multifuels for Combined Heat and Power Generation Processes
by Lianming Li, Zhiwei Wang and Defeng He
Energies 2024, 17(20), 5143; https://doi.org/10.3390/en17205143 - 16 Oct 2024
Abstract
This paper proposes a system for real-time estimation of the calorific value of mixed straw fuels based on an improved U-Net semantic segmentation model. The system addresses the uncertainty in heat and power generation per unit time in combined heat and power generation (CHPG) systems caused by fluctuations in the calorific value of straw fuels. It integrates an industrial camera, a moisture detector, and quality sensors to capture images of the multi-fuel straw, and applies the improved U-Net segmentation network for semantic segmentation of the images, accurately calculating the proportion of each type of straw. The improved U-Net network introduces a self-attention mechanism in the skip connections of the final layer of the encoder, replaces traditional convolutions with depthwise separable convolutions, and replaces the traditional convolutional bottleneck layers with a Transformer encoder. These changes ensure that the model achieves high segmentation accuracy and strong generalization capability while maintaining good real-time performance. The semantic segmentation results of the straw images are used to calculate the proportions of different types of straw and, combined with moisture content and quality data, the calorific value of the mixed fuel is estimated in real time based on the elemental composition of each straw type. Validation using images captured from an actual thermal power plant shows that, under the same conditions, the proposed model has only a 0.2% decrease in accuracy compared to the traditional U-Net segmentation network, while the number of parameters is reduced by 74% and inference speed is improved by 23%.
(This article belongs to the Special Issue Application of New Technologies in Bioenergy and Biofuel Conversion)
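Once segmentation yields per-species area proportions, the mixing step the abstract describes reduces to a weighted average derated by moisture. The sketch below illustrates that arithmetic; the species fractions, heating values, and the linear moisture derating are assumed stand-ins, not the plant's calibration.

```python
# Minimal sketch (assumed values, not the plant's numbers): estimating the
# calorific value of a straw mix from segmentation-derived area proportions,
# per-species heating values, and measured moisture content.
AREA_FRACTIONS = {"corn": 0.55, "wheat": 0.30, "sesame": 0.15}  # from U-Net masks
DRY_HHV_MJ_KG = {"corn": 17.8, "wheat": 17.0, "sesame": 18.5}   # assumed values

def mixed_calorific_value(fractions, hhv, moisture):
    """Weighted dry heating value, derated linearly for moisture content."""
    dry = sum(fractions[s] * hhv[s] for s in fractions)
    return dry * (1.0 - moisture)

print(f"{mixed_calorific_value(AREA_FRACTIONS, DRY_HHV_MJ_KG, moisture=0.12):.2f} MJ/kg")
```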
Figure 1: A schematic of the CHPG process.
Figure 2: The system configuration for calorific value estimation.
Figure 3: The proposed model architecture.
Figure 4: Replacing the bottleneck with a Transformer encoder.
Figure 5: Visualization of the segmentation results of different methods, where green represents sesame, purple denotes corn straw, red indicates wheat, and blue signifies the background.
13 pages, 6666 KiB  
Article
Measurement of Hydraulic Fracture Aperture by Electromagnetic Induction
by Mohsen Talebkeikhah, Alireza Moradi and Brice Lecampion
Sensors 2024, 24(20), 6660; https://doi.org/10.3390/s24206660 - 16 Oct 2024
Abstract
We present a new method for accurately measuring the aperture of a fluid-driven fracture. This method uses an eddy current probe located within a completion tool specifically designed to obtain the fracture aperture in the wellbore at the location where the fluid is injected into the fracture. The probe induces an eddy current in a target object, producing a magnetic field that affects the overall magnetic field. Unlike other methods, it has no limitations with respect to fluid pressure and temperature within a large range. We demonstrate the accuracy and performance of the sensor under laboratory conditions. A hydraulic fracture experiment in a porous sandstone is conducted and discussed. The eddy current probe provided robust measurements of the evolution of the fracture inlet aperture during the multiple injection cycles performed. The residual fracture aperture (after the test) measured by the probe is in line with estimations from image processing of X-ray CT scan images as well as a thin-section analysis of sub-parts of the fractured specimen. The robustness and accuracy of this electromagnetic induction probe demonstrated herein under laboratory conditions indicate an interesting potential for field deployment.
(This article belongs to the Special Issue Electromagnetic Sensing and Its Applications)
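Probe readings like these are typically turned into apertures through a calibration curve fitted against a reference instrument (here, the micrometer used in the calibration device). The sketch below shows that generic step with synthetic numbers; the voltages, apertures, and quadratic form are assumptions, not the paper's calibration.

```python
# Generic calibration step with synthetic numbers (not the paper's data):
# mapping raw eddy current probe readings to fracture aperture via a
# polynomial fitted against micrometer reference measurements.
import numpy as np

probe_volts = np.array([0.10, 0.55, 1.02, 1.48, 1.95])      # EC output (V), synthetic
aperture_um = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])  # micrometer ref (um)

cal = np.polynomial.Polynomial.fit(probe_volts, aperture_um, deg=2)
print(f"Aperture at 1.2 V: {cal(1.2):.0f} um")
```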
Figure 1: Schematic representation of the operational mechanism of an eddy current (EC) probe.
Figure 2: Schematic of the setup designed for calibrating the EC probe under the conditions of the hydraulic fracturing experiment.
Figure 3: EC probe readings and corresponding micrometer results obtained in the calibration device at fluid pressures of 0.1, 1, 5, 10, 15, and 20 MPa.
Figure 4: Schematic of the experimental setup for hydraulic fracture propagation under confinement, with details of the completion and the EC probe measuring fracture aperture at the fracture inlet.
Figure 5: Hydraulic fracturing experiment in a porous sandstone: (a) time evolution of the fracture aperture at the wellbore measured via the EC probe (black); (b) time evolution of the fluid pressure in the well (blue) and the injection rate (red). Three injection cycles were performed. After a hold period following the last cycle, the applied confining stresses were released.
Figure 6: (a) Sandstone sample sliced into four parts after the experiment, displaying the wellbore, completion tool, and EC probe. (b) A closer view of the area surrounding the fracture inlet, where the EC probe is located. (c) The same section exposed to UV light, revealing the fracture path and the leakoff area.
Figure 7: (a) Thin-section image depicting the microstructure of the Molasse sandstone. (b) The 2D particle size distribution of the sandstone obtained from image processing of the thin section.
Figure 8: Schematic diagram of the rock sample with core extraction and the locations of the corresponding CT scan images taken near the fracture inlet, in the middle part of the fracture, and near the fracture front. The corresponding core sample images are shown on the right.
Figure 9: Comparison of the fracture aperture normal to the mid-plane with the vertical aperture of the fracture.
Figure 10: Profile of the residual fracture aperture along the length of the core sample.
13 pages, 6414 KiB  
Article
A Net Shape Profile Extraction Approach for Exploring the Forming Appearance of Inclined Thick-Walled Structures by Wire Arc Additive Manufacturing
by Yexing Zheng, Yongzhe Li, Yijun Zhou, Xiaoyu Wang and Guangjun Zhang
Micromachines 2024, 15(10), 1262; https://doi.org/10.3390/mi15101262 - 16 Oct 2024
Abstract
Wire arc additive manufacturing (WAAM) offers a viable solution for fabricating large-scale metallic parts, which contain various forms of inclined thick-walled structures. Because heat dissipation conditions vary across positions, the inclined thick-walled structure is a major fabrication challenge that may produce collapses and defects. However, there is a lack of effective sensing methods for acquiring the forming appearance of individual beads in the structure. This paper proposes a novel approach for extracting individual bead profiles during the WAAM process. The approach utilizes a structured-laser sensor to capture the morphology of the surface before and after deposition, thereby enabling an accurate acquisition of the bead profile by integrating the laser stripes. Utilizing the proposed approach, the research investigated the forming mechanism of beads in inclined thick-walled components fabricated with various deposition parameters. The width of the overlapping area at the overhanging feature decreased as the layer number increased, while the height of the same area increased. The height of the overlapping area in each layer increased with an increase in deposition current and decreased when the deposition speed was increased. These phenomena suggest that heat input is a major factor influencing the formation of the overhanging feature. Both the deposition current and deposition velocity influence heat input, and thereby can enhance the geometrical accuracy of an overhanging feature. The experimental results indicate that the proposed approach facilitates the investigation of morphology changes, providing a sufficient reference for optimizing deposition parameters.
(This article belongs to the Special Issue Advanced Manufacturing Technology and Systems, 3rd Edition)
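The core extraction idea, differencing the laser-scanned surface before and after a pass to isolate one bead, can be illustrated in a few lines. The sketch below uses synthetic profiles and an assumed 0.1 mm noise floor; it is a cartoon of the principle, not the paper's stripe-processing algorithm.

```python
# Cartoon of the principle (synthetic profiles, not the paper's algorithm):
# recovering an individual bead profile by differencing structured-laser
# surface scans taken before and after deposition on a common lateral grid.
import numpy as np

x = np.linspace(0.0, 20.0, 400)                              # lateral position (mm)
before = 0.05 * x                                            # surface before deposition
after = before + 2.5 * np.exp(-0.5 * ((x - 10) / 2.0) ** 2)  # after: one bead added

bead = after - before                            # net bead profile (mm)
mask = bead > 0.1                                # assumed 0.1 mm noise floor
width = x[mask][-1] - x[mask][0]
print(f"Bead height: {bead.max():.2f} mm, width: {width:.1f} mm")
```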
Figure 1: Measuring the geometric information of a deposit in an MLMB structure: (a) the mechanism of structured-laser sensing; (b) a typical laser stripe image; (c) the formed component.
Figure 2: The workflow of the algorithm developed for detecting bead geometries.
Figure 3: Preprocessing and integration of laser stripes before and after deposition: (a) before processing; (b) after processing; (c) before deposition; (d) after deposition; (e) fused laser stripes.
Figure 4: Standard structure and morphology characteristics of the structure.
Figure 5: The implemented robotic wire arc additive manufacturing (WAAM) system.
Figure 6: The formed inclined part and the extraction results: (a) formed part; (b) extraction results.
Figure 7: Width (a) and height (b) overlap ratios of individual beads.
Figure 8: Section diagrams of the processed parts under different processing parameters: (a) 150 A + 4 mm/s; (b) 150 A + 5 mm/s; (c) 150 A + 6 mm/s; (d) 150 A + 7 mm/s; (e) 150 A + 8 mm/s; (f) 135 A + 6 mm/s; (g) 165 A + 6 mm/s.
Figure 9: Layer height overlap ratios of inclined components under different deposition parameters: (a) different currents and voltages; (b) different speeds.
Figure 10: Heights of B(n,1) and B(n,2) of the inclined component.
Figure 11: The influence of bead interaction on the shape formation process.
Figure 12: The lack of a sufficiently wide base for the bead in the suspended position.
16 pages, 2895 KiB  
Article
Accuracy Assessment of NOAA IMS 4 km Products on the Tibetan Plateau with Landsat-8 OLI Images
by Duo Chu
Atmosphere 2024, 15(10), 1234; https://doi.org/10.3390/atmos15101234 - 15 Oct 2024
Abstract
The NOAA IMS (Interactive Multisensor Snow and Ice Mapping System) is a blended snow and ice product based on active and passive satellite sensors, ground observation, and other auxiliary information, providing the daily cloud-free snow cover extent in the Northern Hemisphere (NH) and having great application potential for snow cover monitoring and research on the Tibetan Plateau (TP). However, accuracy assessment of the products is crucial for various aspects of their application. In this study, Landsat-8 OLI images were used to evaluate and validate the accuracy of IMS products in snow cover monitoring on the TP. The results show that (1) the average overall accuracy of IMS 4 km products is 76.0% and the average mapping accuracy is 88.3%, indicating that IMS 4 km products are appropriate for large-scale snow cover monitoring on the TP. (2) IMS 4 km products tend to overestimate actual snow cover on the TP, with an average commission rate of 45.4% and omission rate of 11.7%; in general, the higher the proportion of snow-covered area, the lower the omission rate and the higher the commission rate. (3) Mapping accuracy of IMS 4 km snow cover on the TP is generally higher at high altitudes, and commission and omission errors increase with decreasing elevation. (4) Compared with the limited regional representativeness of ground observations, the spatial characteristics of snow cover based on high-resolution remote sensing data are much more detailed, and more reliable verification results can be obtained. (5) In addition to commission and omission error metrics, the overall accuracy and mapping accuracy based on the reference image instead of the classified image can better reveal the general monitoring accuracy of IMS 4 km products over the TP.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
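The metrics in this abstract come from a binary snow/no-snow confusion matrix with Landsat-8 as the reference. The sketch below spells out the standard definitions; the pixel counts are fabricated but chosen so the rates land near the abstract's reported averages (OA 76.0%, mapping accuracy 88.3%, omission 11.7%, commission 45.4%).

```python
# Standard confusion matrix definitions behind the reported rates, with
# fabricated pixel counts chosen to land near the abstract's averages.
tp, fp, fn, tn = 700, 582, 93, 1437  # snow/no-snow counts vs. Landsat-8 reference

overall_accuracy = (tp + tn) / (tp + fp + fn + tn)
mapping_accuracy = tp / (tp + fn)   # reference snow correctly mapped by IMS
omission_rate = fn / (tp + fn)      # reference snow missed by IMS
commission_rate = fp / (tp + fp)    # IMS snow absent in the reference

print(f"OA={overall_accuracy:.1%}  MA={mapping_accuracy:.1%}  "
      f"omission={omission_rate:.1%}  commission={commission_rate:.1%}")
```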
Figure 1: Study area and locations of the Landsat-8 OLI images selected for validation. The background image is the snow cover extent from the IMS 4 km product on 25 December 2018.
Figure 2: IMS 4 km snow cover extent in the NH on 25 February 2018.
Figure 3: The first 10 Landsat-8 band 6-3-2 composite images (a1,a2) and the corresponding snow cover maps at 1 km spatial resolution from Landsat-8 (b1,b2) and IMS (c1,c2), by row. The date, path, and row of the Landsat-8 OLI images are shown at the top of the images.
Figure 4: Overall accuracy of IMS 4 km products on the TP based on Landsat-8 images.
Figure 5: Omission and commission errors of IMS 4 km products on the TP based on Landsat-8 images.
Figure 6: (a) Spatial distribution of accuracy errors of IMS 4 km products on 8 January 2018. (b) Accuracy errors of IMS 4 km products with elevation on 8 January 2018.
Figure 7: (a) Spatial distribution of accuracy errors of IMS 4 km products on 19 January 2017. (b) Accuracy errors of IMS 4 km products with elevation on 19 January 2017.
Full article ">
11 pages, 4909 KiB  
Communication
A Kernel-Based Calibration Algorithm for Chromatic Confocal Line Sensors
by Ming Qin, Xiao Xiong, Enqiao Xiao, Min Xia, Yimeng Gao, Hucheng Xie, Hui Luo and Wenhao Zhao
Sensors 2024, 24(20), 6649; https://doi.org/10.3390/s24206649 - 15 Oct 2024
Abstract
In chromatic confocal line sensors, calibration is usually divided into peak extraction and wavelength calibration. Previous research has focused mainly on peak extraction. In this paper, a kernel-based algorithm is proposed to handle wavelength calibration, which corresponds to the mapping between peaks (i.e., wavelengths) in image space and profiles in physical space. The primary component of the mapping function is described by polynomial basis functions, distinguished along the various dispersion axes. Considering the unknown distortions resulting from field curvature, sensor fabrication and assembly, and even the inherent complexity of dispersion, a kernel trick-based nonparametric function element is introduced, predicated on the notion that similar processes conducted on the same sensor yield comparable distortions. To ascertain the performance with and without the kernel trick, we carried out wavelength calibration and groove fitting on a standard groove sample processed via glass grinding, with a reference depth of 66.14 μm. The experimental results show that depths calculated by the kernel-based calibration algorithm have higher accuracy and lower uncertainty than those obtained with the conventional polynomial algorithm, indicating that the proposed algorithm provides an effective improvement.
(This article belongs to the Section Optical Sensors)
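The abstract's recipe, a parametric polynomial baseline plus a kernel-trick nonparametric term for residual distortion, resembles kernel ridge regression on polynomial residuals. The sketch below shows that combination on synthetic data; the kernel choice, hyperparameters, and data are assumptions, not the paper's algorithm.

```python
# Illustrative combination (synthetic data, not the paper's algorithm):
# a polynomial baseline for the wavelength-to-height mapping plus kernel
# ridge regression on the residual distortion.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
peak = np.sort(rng.uniform(0, 1, 200))                    # normalized peak wavelength
z = 3.0 * peak + 0.5 * peak**2 + 0.05 * np.sin(8 * peak)  # true height + distortion

poly = np.polynomial.Polynomial.fit(peak, z, deg=2)  # parametric baseline
resid = z - poly(peak)                               # unmodeled distortion

krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=20.0)
krr.fit(peak[:, None], resid)                        # kernel trick on the residuals

z_hat = poly(peak) + krr.predict(peak[:, None])
print(f"RMS calibration error: {np.sqrt(np.mean((z_hat - z) ** 2)):.2e}")
```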
<p>Coordinate systems in image and physical spaces. Peaks and profiles are distributed in image and physical spaces, respectively.</p>
Full article ">Figure 2
<p>Measurement case in which a standard groove sample is fixed on a sliding platform, which is capable of translation in the <span class="html-italic">Z</span> direction.</p>
Full article ">Figure 3
<p>A raw image of a standard groove sample from the LSCF1000 and the corresponding peak extraction result using the centroid method. (<b>a</b>) A raw image of a standard groove sample. (<b>b</b>) Peak extraction on the raw image.</p>
Full article ">Figure 3 Cont.
<p>A raw image of a standard groove sample from the LSCF1000 and the corresponding peak extraction result using the centroid method. (<b>a</b>) A raw image of a standard groove sample. (<b>b</b>) Peak extraction on the raw image.</p>
Full article ">Figure 4
<p>Data collection for transformation in the <span class="html-italic">Z</span> direction. The standard flat mirror parallel to the <math display="inline"><semantics> <mrow> <mi>X</mi> <mi>Y</mi> </mrow> </semantics></math> plane is fixed on a sliding platform capable of translation in the <span class="html-italic">Z</span> direction. The wavelength <math display="inline"><semantics> <mi>λ</mi> </semantics></math> on the <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>-th column refers to the peak extracted from the image space, and the corresponding displacement <math display="inline"><semantics> <mover accent="true"> <mi>z</mi> <mo>¯</mo> </mover> </semantics></math> is obtained from the sliding platform.</p>
Full article ">Figure 5
<p>Data collection for transformation in the <span class="html-italic">X</span> direction. The standard flat mirror is fixed on a rotation device. The <span class="html-italic">X</span>-tilt angle <math display="inline"><semantics> <mi>θ</mi> </semantics></math> of the standard flat mirror is obtained from the rotation device, and the corresponding slope <span class="html-italic">k</span> is calculated after transformation in the <span class="html-italic">Z</span> direction.</p>
Full article ">Figure 6
<p>An example of groove depth calculation, which can be divided into two sequential subprocesses: (<b>a</b>) wavelength calibration and (<b>b</b>) groove fitting. The groove depth computed from the profile is about 66.24 μm.</p>
Full article ">Figure 6 Cont.
<p>An example of groove depth calculation, which can be divided into two sequential subprocesses: (<b>a</b>) wavelength calibration and (<b>b</b>) groove fitting. The groove depth computed from the profile is about 66.24 μm.</p>
Full article ">Figure 7
<p>An ideal groove sample in physical space. The upper and lower boundaries coincide with two parallel straight dashed lines, the normal vector of which is <math display="inline"><semantics> <msup> <mi>W</mi> <mo>′</mo> </msup> </semantics></math>.</p>
Full article ">Figure 8
<p>Comparison of deviation from true depth between polynomial and kernel-based algorithms.</p>
Full article ">
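To make the semiparametric idea concrete — a polynomial trend plus a kernel-based nonparametric correction fitted jointly — here is a minimal Python sketch of such a wavelength-to-displacement mapping. It is not the authors' implementation: the Gaussian kernel, its width, the polynomial degree, the jitter value, and the synthetic data are illustrative assumptions, and a real sensor would be calibrated separately along each dispersion axis (pixel column).

```python
import numpy as np

def fit_semiparametric(lam, z, degree=3, gamma=50.0, jitter=1e-6):
    """Jointly fit z = P(lam) @ beta + K(lam, lam) @ alpha, where P is a
    polynomial basis (parametric trend) and K a Gaussian kernel matrix
    (nonparametric distortion term)."""
    P = np.vander(lam, degree + 1)                           # (n, degree+1)
    K = np.exp(-gamma * (lam[:, None] - lam[None, :]) ** 2)  # (n, n) RBF kernel
    A = np.hstack([P, K + jitter * np.eye(len(lam))])        # jitter aids conditioning
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[:degree + 1], coef[degree + 1:]

def predict(lam_new, beta, alpha, lam_train, degree=3, gamma=50.0):
    P = np.vander(lam_new, degree + 1)
    K = np.exp(-gamma * (lam_new[:, None] - lam_train[None, :]) ** 2)
    return P @ beta + K @ alpha

# Synthetic check: a polynomial trend plus a smooth "distortion" ripple.
lam = np.linspace(0.45, 0.65, 200)                        # peak wavelengths, micrometres
z = 1000 * lam**2 - 500 * lam + 0.02 * np.sin(40 * lam)   # platform displacements
beta, alpha = fit_semiparametric(lam, z)
print("max residual:", np.abs(predict(lam, beta, alpha, lam) - z).max())
```

The pure-polynomial baseline the paper compares against corresponds to dropping the kernel columns from the design matrix; the kernel term is what absorbs the smooth, sensor-specific distortions the abstract mentions.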
16 pages, 1563 KiB  
Article
Tree Species Classification from UAV Canopy Images with Deep Learning Models
by Yunmei Huang, Botong Ou, Kexin Meng, Baijian Yang, Joshua Carpenter, Jinha Jung and Songlin Fei
Remote Sens. 2024, 16(20), 3836; https://doi.org/10.3390/rs16203836 (registering DOI) - 15 Oct 2024
Viewed by 361
Abstract
Forests play a critical role in the provision of ecosystem services, and understanding their compositions, especially tree species, is essential for effective ecosystem management and conservation. However, identifying tree species is challenging and time-consuming. Recently, unmanned aerial vehicles (UAVs) equipped with various sensors [...] Read more.
Forests play a critical role in the provision of ecosystem services, and understanding their compositions, especially tree species, is essential for effective ecosystem management and conservation. However, identifying tree species is challenging and time-consuming. Recently, unmanned aerial vehicles (UAVs) equipped with various sensors have emerged as a promising technology for species identification due to their relatively low cost and high spatial and temporal resolutions. Moreover, the advancement of deep learning models makes remote sensing-based species identification increasingly feasible. However, three questions remain to be answered: first, which of the state-of-the-art models performs best for this task; second, which is the optimal season for tree species classification in a temperate forest; and third, whether a model trained in one season can be effectively transferred to another season. To address these questions, we focused on tree species classification using five state-of-the-art deep learning models on UAV-based RGB images, and we explored model transferability between seasons. Utilizing UAV images taken in the summer and fall, we captured 8799 crown images of eight species. We trained the five models on summer and fall images and compared their performance on the same dataset. All models achieved high performance in species classification, with the best performance on summer images, where the average F1-score was 0.96. For the fall images, Vision Transformer (ViT), EfficientNetB0, and YOLOv5 achieved F1-scores greater than 0.9, outperforming both ResNet18 and DenseNet. On average, across the two seasons, ViT achieved the best accuracy. This study demonstrates the capability of deep learning models in forest inventory, particularly for tree species classification. While the choice of model may not significantly affect performance when using summer images, the advanced models prove to be a better choice for fall images. Given the limited transferability from one season to another, further research is required to overcome the challenges associated with transferring models across seasons. Full article
(This article belongs to the Special Issue LiDAR Remote Sensing for Forest Mapping)
Show Figures

Figure 1. Work pipeline for tree species classification with UAV images and deep learning models.
Figure 2. Study area and label examples. (a) Martell Forest in Indiana, USA; (b) canopy image of a black cherry (Prunus serotina) plantation; (c) label examples from the black cherry plantation (all crowns were identified with bounding boxes).
Figure 3. Examples of seasonal differences among the eight species. These crown images are cropped from orthophotos of the study area and show the variation of the same trees' crowns across seasons.
Figure 4. F1-scores of the five models for the summer and fall seasons.
Figure A1. Number of images vs. F1-score on the summer dataset for four species with ResNet18. Owing to the limited number of images, four species were selected and the number of training images was varied from 60 to 280 in increments of 20; in each training session, all four classes had an equal number of training images and shared the same test dataset, so ResNet18 was trained 12 times. Accuracy increased faster over the 60–180 range than over 200–280, and the change between 260 and 280 images was negligible. Hence, the number of images affects classification accuracy, but the influence diminishes once the training set reaches a certain size.
Figure A2. Number of images vs. F1-score with ResNet18 on the two datasets for eight species. Point shapes denote the season: circles for the summer dataset, squares for fall.
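For readers who want to reproduce something similar, the sketch below shows transfer learning for crown-image classification with PyTorch/torchvision. It is not the authors' training code: the directory layout, the ResNet18 backbone (one of the five models compared), the image size, and the hyperparameters are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 8  # eight species, as in the paper

# Hypothetical layout: crops/train/<species_name>/*.png, one folder per species.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("crops/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping the backbone for ViT or EfficientNetB0 (e.g., `models.vit_b_16` or `models.efficientnet_b0`) changes little beyond the model-construction lines, though each backbone exposes its classifier head under a different attribute; the paper's cross-season experiments amount to training on one season's images and evaluating on the other's.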
19 pages, 6481 KiB  
Article
Parallel Lossless Compression of Raw Bayer Images on FPGA-Based High-Speed Camera
by Žan Regoršek, Aleš Gorkič and Andrej Trost
Sensors 2024, 24(20), 6632; https://doi.org/10.3390/s24206632 - 15 Oct 2024
Viewed by 335
Abstract
Digital image compression is applied to reduce camera bandwidth and storage requirements, but real-time lossless compression on a high-speed high-resolution camera is a challenging task. The article presents hardware implementation of a Bayer colour filter array lossless image compression algorithm on an FPGA-based [...] Read more.
Digital image compression is applied to reduce camera bandwidth and storage requirements, but real-time lossless compression on a high-speed, high-resolution camera is a challenging task. The article presents a hardware implementation of a Bayer colour filter array lossless image compression algorithm on an FPGA-based camera. The compression algorithm reduces colour and spatial redundancy and employs Golomb–Rice entropy coding. A rule limiting the maximum code length is introduced for the edge cases. The proposed algorithm is based on integer operators for efficient hardware implementation. The algorithm is first verified as a C++ model and later implemented on an AMD-Xilinx Zynq UltraScale+ device using VHDL. An effective tree-like pipeline structure is proposed to concatenate the codes of compressed pixel data into a bitstream representing 16 parallel pixels. The proposed parallel compression achieves up to a 56% reduction in image size for high-resolution images. A pipelined implementation without any state machine ensures operating frequencies up to 320 MHz. Parallelised operation on 16 pixels effectively increases data throughput to 40 Gbit/s while keeping total memory requirements low due to real-time processing. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1. Block diagram of lossless compression on an FPGA-based camera.
Figure 2. Typical Bayer CFA pattern with twice as many green receptors as red and blue ones.
Figure 3. An example sequence of 128-bit words, each representing 16 pixels, clocked in from the image sensor one after another into the two line buffers. The image is 5120 pixels wide, so filling one FIFO line buffer requires 5120/16 = 320 clock cycles.
Figure 4. Buffers' state at t = 320. At t = 321 and t = 322, four Bayer elements will be assembled from 16 pixels.
Figure 5. Kodak 13 from the KODAK dataset [40]: the content of the image is clearly recognisable in all three RGB colour channels (left) and much less pronounced after the YCCC transformation (right). (a) Colour image and Y channel; (b) red and Cd channels; (c) green and Cm channels; (d) blue and Co channels.
Figure 6. Image data distribution. The initially random distribution (a) is transformed by Equations (3)–(5) into a Laplace distribution (b), and by Equation (6) into an exponentially decreasing monotonic distribution (c). (a): x; (b): x_d; (c): x_pos.
Figure 7. Effect of k on the code length of the Golomb–Rice encoding scheme.
Figure 8. Detailed schematic of the complete compression algorithm.
Figure 9. Example of parallelism in the case of P = 16 parallel pixels supplied by the image sensor.
Figure 10. Calculation of k as a pipeline. The same pipeline is implemented in parallel for the four channels.
Figure 11. Illustration of calculating q and r in C++ (a) and VHDL (b). (a) Example calculation of the Golomb–Rice quotient and remainder; (b) hardware calculation of q (green) and r (blue) by bit-slicing and bit-shifting. In this case, x_pos = 17 and k = 3.
Figure 12. Generation of the final code.
Figure 13. Pipelined bitstream generation on 16 parallel pixels.
Figure 14. The 18-megapixel images car_01.png and oscilloscope.png, with CR 2.26 and 1.80, respectively.
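The entropy stage described above is Golomb–Rice coding of non-negative residuals (cf. Figures 7 and 11), with a rule limiting the maximum code length for edge cases. The Python sketch below illustrates the coding scheme only — the authors work with a C++ model and a VHDL implementation — and the unary convention and the escape rule shown here are illustrative assumptions, not the paper's exact rule.

```python
def zigzag(x: int) -> int:
    """Map a signed prediction residual to a non-negative integer so that
    small magnitudes get short codes (cf. the x -> x_pos step)."""
    return 2 * x if x >= 0 else -2 * x - 1

def golomb_rice_encode(x_pos: int, k: int, q_max: int = 16) -> str:
    """Return the Golomb-Rice codeword for x_pos as a bit string:
    a unary-coded quotient (q ones, then a zero) followed by the k-bit
    binary remainder. Quotients of q_max or more fall back to an escape
    code, loosely mirroring a maximum-code-length rule."""
    q, r = x_pos >> k, x_pos & ((1 << k) - 1)
    if q < q_max:
        return "1" * q + "0" + format(r, f"0{k}b")
    return "1" * q_max + format(x_pos, "016b")  # escape: value sent verbatim

# The worked example of Figure 11: x_pos = 17, k = 3 gives q = 2, r = 1.
print(golomb_rice_encode(17, 3))  # -> 110001
```

In hardware, as Figure 11b indicates, the quotient and remainder need no division at all: q is the upper bit-slice of x_pos and r the lower k bits, which is why the whole encoder reduces to shifts, slices, and concatenation.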
42 pages, 631 KiB  
Review
Electromagnetic and Radon Earthquake Precursors
by Dimitrios Nikolopoulos, Demetrios Cantzos, Aftab Alam, Stavros Dimopoulos and Ermioni Petraki
Geosciences 2024, 14(10), 271; https://doi.org/10.3390/geosciences14100271 - 14 Oct 2024
Viewed by 624
Abstract
Earthquake forecasting is arguably one of the most challenging tasks in Earth sciences owing to the high complexity of the earthquake process. Over the past 40 years, there has been a plethora of work on finding credible, consistent and accurate earthquake precursors. This [...] Read more.
Earthquake forecasting is arguably one of the most challenging tasks in the Earth sciences owing to the high complexity of the earthquake process. Over the past 40 years, there has been a plethora of work on finding credible, consistent and accurate earthquake precursors. This paper is a comprehensive survey of earthquake precursor research, arranged into two broad categories: electromagnetic precursors and radon precursors. In the first category, methods based on measuring electromagnetic radiation over a wide frequency range, from a few Hz to several MHz, are presented. Precursors based on optical and radar imaging acquired by spaceborne sensors are also considered, in the broad sense, electromagnetic. In the second category, concentration measurements of radon gas in soil and air, or dissolved in groundwater, form the basis of radon activity precursors. Well-established mathematical techniques for analysing data derived from electromagnetic radiation and radon concentration measurements are also described, with an emphasis on fractal methods. Finally, physical models of earthquake generation and propagation that aim to explain the physical basis of the aforementioned seismic precursors are examined. Full article
(This article belongs to the Special Issue Precursory Phenomena Prior to Earthquakes (2nd Edition))
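Of the fractal methods the survey emphasises, one of the most common in the EM/radon precursor literature is estimating the power-law exponent β of a signal's power spectral density, S(f) ∝ f^(−β), whose drift over time is then inspected for precursory changes. The sketch below is a bare-bones periodogram fit in Python; windowing, detrending, sliding-window analysis, and fit-range selection — all essential in practice — are omitted, and the function name is hypothetical.

```python
import numpy as np

def spectral_exponent(x, fs=1.0):
    """Estimate beta in S(f) ~ f**(-beta) from the raw periodogram of a
    time series via a log-log linear fit (slope = -beta)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                              # remove the DC offset
    psd = np.abs(np.fft.rfft(x)) ** 2             # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = freqs > 0                              # skip the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    return -slope

# Sanity check on white noise, whose expected exponent is close to 0.
rng = np.random.default_rng(0)
print(spectral_exponent(rng.standard_normal(4096)))
```

In this literature, β values evolving from white-noise-like behaviour toward persistent, fractional-Brownian-motion-like behaviour are among the signatures examined; the sketch above is only the lowest-level building block of such analyses.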