Remote Sens., Volume 16, Issue 8 (April-2 2024) – 166 articles

Cover Story (view full-size image): The integration of multi-satellite remote sensing and citizen science observations sets the stage for advanced monitoring of river ice dynamics in Alaska. This study leverages the Google Earth Engine platform to enhance the timeliness and accuracy of river ice observations, especially during critical freeze-up and breakup periods. By incorporating both high-resolution optical and radar data, our approach significantly improves the monitoring and analysis of river ice conditions. Furthermore, the inclusion of citizen science data provides essential ground-truth insights, notably enhancing the validation and interpretation of remote sensing products. Together, these methodologies not only refine ice monitoring technologies but also deepen our understanding of ice-induced hazards.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 7239 KiB  
Article
Focusing Algorithm of Range Profile for Plasma-Sheath-Enveloped Target
by Fangfang Shen, Xuyang Chen, Bowen Bai, Yanming Liu, Xiaoping Li and Zherui Zhang
Remote Sens. 2024, 16(8), 1475; https://doi.org/10.3390/rs16081475 - 22 Apr 2024
Viewed by 1095
Abstract
In this paper, a one-dimensional (1-D) range profile of a hypersonic target enveloped by a plasma sheath is investigated. First, the non-uniform property of the plasma sheath is studied and its impact on wideband electromagnetic (EM) waves is analyzed, and a wideband radar echo model for the plasma-sheath-enveloped hypersonic target is constructed. Then, by exploiting the relationship among incident depth, reflection intensity, and plasma velocity, we show that scattering points in different areas of the target suffer from varying reflection intensities and coupled velocities, leading to severe defocusing in the range profile. To tackle this issue, a novel focusing algorithm combining the Fractional Fourier Transform (FRFT) with the CLEAN technique is developed, which independently estimates the coupled plasma velocity and compensates for the phase error through a series of iterative procedures. Finally, the influence of the plasma sheath on the 1-D range profile and the effectiveness of the proposed focusing algorithm are validated through simulations.
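The FRFT-plus-CLEAN procedure is described only at a high level in the abstract. As a rough, hedged illustration of one CLEAN-style focusing loop (not the paper's algorithm), the sketch below substitutes a dechirp-and-FFT search for the FrFT order search; the sampling rate and chirp-rate grid are invented for the example.

```python
import numpy as np

def clean_focus(sig, fs, n_iter=3, mu_grid=None):
    """Illustrative dechirp + CLEAN loop (stand-in for an FrFT order search).

    Each iteration finds the quadratic-phase (chirp) rate that best
    concentrates the strongest scatterer, records that component, and
    subtracts it from the residual. All parameters are placeholders.
    """
    t = np.arange(len(sig)) / fs
    residual = np.asarray(sig, dtype=complex).copy()
    if mu_grid is None:
        mu_grid = np.linspace(-5e6, 5e6, 201)  # candidate chirp rates (Hz/s)
    components = []
    for _ in range(n_iter):
        # Pick the chirp rate whose dechirped spectrum has the sharpest peak.
        best = max(mu_grid, key=lambda mu: np.abs(
            np.fft.fft(residual * np.exp(-1j * np.pi * mu * t**2))).max())
        spec = np.fft.fft(residual * np.exp(-1j * np.pi * best * t**2))
        k = int(np.abs(spec).argmax())
        peak = np.zeros_like(spec)
        peak[k] = spec[k]                      # CLEAN: keep only the peak bin
        components.append((best, k, spec[k]))
        # Re-chirp the extracted component and remove it from the residual.
        residual -= np.fft.ifft(peak) * np.exp(1j * np.pi * best * t**2)
    return components, residual
```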
Show Figures

Figure 1: Geometric model of the layered plasma.
Figure 2: Geometry model of the RAM-C and its flow-field data (50 km, 20 Ma). (a) Geometry model of the blunt cone. (b) Electron density distribution. (c) Velocity field distribution. (d) Schematic diagram of reflection depth.
Figure 3: Schematic of the wideband radar echo model of the plasma-sheath-enveloped target.
Figure 4: Comparison of the range profiles under different velocities. (a) v1 = 5100 m s⁻¹. (b) v2 = 6800 m s⁻¹.
Figure 5: Geometric model of the RAM-C target.
Figure 6: Range profile of the RAM-C target in the presence of a plasma sheath (50 km, 20 Ma). (a) Range profile (no plasma sheath). (b) Range profile (plasma sheath). (c) Original RD ISAR. (d) ISAR defocused image. (e) FRFT energy distribution. (f) Profile view of the FRFT with fixed α.
Figure 7: Range profile and ISAR image of the plasma-sheath-enveloped target. (a) Range profile. (b) The reconstructed ISAR image.
Figure 8: Signal separation in the fractional Fourier domain. (a) First signal distribution on the plane. (b) Profile view. (c) Fifth signal distribution on the plane. (d) Profile view. (e) Ninth signal distribution on the plane. (f) Profile view.
Figure 9: Reconstructed ISAR images. (a) RD (50 km, 15 Ma). (b) Proposed algorithm (50 km, 15 Ma). (c) RD (40 km, 15 Ma). (d) Proposed algorithm (40 km, 15 Ma). (e) RD (30 km, 15 Ma). (f) Proposed algorithm (30 km, 15 Ma). (g) RD (30 km, 25 dB). (h) Proposed algorithm (30 km, 25 Ma).
22 pages, 24080 KiB  
Article
Kinematic and Dynamic Structure of the 18 May 2020 Squall Line over South Korea
by Wishnu Agum Swastiko, Chia-Lun Tsai, Seung Hee Kim and GyuWon Lee
Remote Sens. 2024, 16(8), 1474; https://doi.org/10.3390/rs16081474 - 22 Apr 2024
Viewed by 1316
Abstract
The diagonal squall line that passed through the Korean Peninsula on 18 May 2020 was examined using wind data retrieved from a multiple-Doppler radar synthesis, focusing on its kinematic and dynamic aspects. The low-level jet, along with warm and moist air at lower levels, served as the primary source of moisture supply during the initiation and formation process. The cold pool accompanying the squall line played a role in retaining moisture at the surface. As the squall line approached the Korean Peninsula, the convective bands in the northern segment (NS) and southern segment (SS) of the squall line exhibited distinct evolutionary patterns. The vertical wind shear in the NS was more pronounced than that in the SS. The ascending inflow associated with the tilted updraft in the NS reached an altitude of 7 km, whereas it reached only 4 km in the SS. The difference was caused by the strong descending rear flow, which obstructed the ascending inflow and weakened the updraft in the SS.
(This article belongs to the Section Atmospheric Remote Sensing)
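As context for the shear diagnostic used throughout this article (e.g., Figure 8), the snippet below computes a near-surface-to-3-km bulk vertical wind shear on a synthetic profile; it is a generic textbook diagnostic, not the paper's WISSDOM retrieval.

```python
import numpy as np

# Synthetic wind profile: height (m) and horizontal wind components (m/s).
z = np.array([100.0, 500.0, 1000.0, 2000.0, 3000.0])
u = np.array([2.0, 5.0, 9.0, 13.0, 16.0])
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Bulk shear: magnitude of the vector wind difference between the
# lowest level and 3 km.
bulk_shear = np.hypot(u[-1] - u[0], v[-1] - v[0])
print(f"near-surface-to-3-km bulk shear: {bulk_shear:.1f} m/s")
```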
Show Figures

Figure 1: The domain of this study, with topographic features of the Korean Peninsula, divided according to the developing stages of the two segments (northern and southern segments; herein NS and SS). The red dots represent the Incheon and Seoul Automatic Weather Stations (AWSs). The red triangles for Baekryeongdo and Osan (47102 and 47122, respectively) represent the locations of the upper-air station sites. The squares in the bold black boxes represent the developing stages according to the locations of the two segments in different geographical settings: the ocean (I, IV), the coast (II, V), and the inland (III, VI).
Figure 2: Radar reflectivity composited at 2 km altitude, shown in (a–h) at 1 h intervals from 0400 to 1100 UTC 18 May 2020. Color shading represents reflectivity (dBZ). NS (SS) is the abbreviation for the northern (southern) segment. The dashed-line boxes (I, IV), (II, V), and (III, VI) represent stages in different geographical settings (ocean, coast, and inland, respectively).
Figure 3: (a) Track of the leading edge of the squall line from 0400 to 0920 UTC 18 May 2020. The thick arrow depicts the movement vectors of the leading edge; the numbers indicate its movement speed (m s⁻¹). (b) Composited reflectivity larger than 35 dBZ, represented by color-shaded contours. The color bar shows the time period of reflectivity larger than 35 dBZ, which is similar to the time period of the leading edge. (c) Time series of the mean reflectivity larger than 35 dBZ (black solid line) and the mean area over 35 dBZ (km², blue dashed line).
Figure 4: Synoptic analysis charts obtained from the KMA at 0000 UTC on 18 May 2020: (a) surface weather chart; (b) 850 hPa wind vectors, convergence (10⁻⁶ s⁻¹, shaded), and isotachs (>25 knots, barbs); (c) 500 hPa geopotential height (m, black lines), temperature (°C, red lines), and relative vorticity (10⁻⁵ s⁻¹, shaded); (d) storm-relative helicity (SRH) from the surface to 3 km (m² s⁻², shaded). The red arrows and purple shading in (a) represent the upper- and lower-level jets, respectively.
Figure 5: Skew T–log p diagrams for the (a) Baekryeongdo (47102) and (b) Osan (47122) sites, obtained from the Korea Meteorological Administration (KMA) at 0000 UTC on 18 May 2020.
Figure 6: Horizontal distribution of the equivalent potential temperature perturbation (θ′e) (K, color shaded), reflectivity (larger than 35 dBZ, green contours), and the leading edge (thick black dashed line), shown in (a–h) at 1 h intervals from 0400 to 1100 UTC, based on AWS observation data.
Figure 7: Time series of Incheon and Seoul AWS station data between 0300 and 1100 UTC: surface wind (m s⁻¹, barbs), hourly precipitation (mm h⁻¹), equivalent potential temperature perturbation (θ′e) (K), and pressure perturbation (p′) (hPa). The thick black dashed (solid) lines represent each parameter at the Incheon (Seoul) AWS. The gray and black arrows indicate the onset of precipitation at the Incheon and Seoul AWS stations, respectively.
Figure 8: Vertical wind shear from near-surface to 3 km altitude (m s⁻¹), shown in (a–f) at 1 h intervals from 0400 to 1100 UTC using WISSDOM. Color shading represents the magnitude of the vertical wind shear; vectors represent the shear vector. The red dashed line in each panel marks the leading edge of the squall line. The dashed-line boxes (I, IV), (II, V), and (III, VI) represent domains in different geographical settings (ocean, coast, and inland, respectively).
Figure 9: Reflectivity (dBZ, shaded) and storm-relative winds (ur, vr) (m s⁻¹, vectors) at 2 km in the NS: (a) 0500 UTC (domain I), (b) 0620 UTC (domain II), and (c) 0920 UTC (domain III). The thin black solid line marks the area of the convective band used for the cross-section.
Figure 10: Vertical structure of reflectivity (dBZ) (top row) and storm-relative winds (ur, w) (m s⁻¹) (bottom row) in the NS at (a,d) 0500 UTC (domain I), (b,e) 0620 UTC (domain II), and (c,f) 0920 UTC (domain III). Black shading in (a–f) represents topography. The thick green solid line is the 35 dBZ reflectivity contour.
Figure 11: As in Figure 9, but for (a) 0530 UTC (domain IV), (b) 0740 UTC (domain V), and (c) 1050 UTC (domain VI) of 18 May 2020 in the SS.
Figure 12: As in Figure 10, but for (a) 0530 UTC (domain IV), (b) 0740 UTC (domain V), and (c) 1050 UTC (domain VI) of 18 May 2020 in the SS.
Figure 13: Horizontal distribution of vertical motion (m s⁻¹, shaded) and relative vorticity (10⁻³ s⁻¹, contours) in the NS: (a) 0500 UTC (domain I), (b) 0620 UTC (domain II), and (c) 0920 UTC (domain III). Thin black solid (dashed) contours represent positive (negative) vorticity. Red (blue) shading in (a–c) denotes updraft (downdraft). The thick black solid line marks the leading edge; the thick green contour is the 35 dBZ reflectivity; the thin black solid line marks the area of the convective band used for the cross-section.
Figure 14: Cross-sections of vertical motion (m s⁻¹, shaded) and relative vorticity (10⁻³ s⁻¹, contours) in the NS: (a) 0500 UTC (domain I), (b) 0620 UTC (domain II), and (c) 0920 UTC (domain III). Black solid (dashed) contours represent positive (negative) vorticity. Red (blue) shading in (a–c) denotes updraft (downdraft). Black shading denotes topography. The thick green solid contour is the 35 dBZ reflectivity.
Figure 15: As in Figure 13, but for (a) 0530 UTC (domain IV), (b) 0740 UTC (domain V), and (c) 1050 UTC (domain VI) of 18 May 2020 in the SS.
Figure 16: As in Figure 14, but for (a) 0530 UTC (domain IV), (b) 0740 UTC (domain V), and (c) 1050 UTC (domain VI) of 18 May 2020 in the SS.
Figure 17: Schematic diagram of the mechanism for the convective bands in the northern and southern segments during the mature stages (domains II (NS) and V (SS)). The blue shading at the bottom indicates the cold pool. The green arrows represent the low-level jet. The blue and red arrows represent vertical motion (blue: downdraft; red: updraft). The center region shaded in orange and yellow indicates the varying intensities of the radar band (35 and 40 dBZ). The thick black arrows represent front-to-rear and rear-to-front flow. NS (SS) is the abbreviation for the northern (southern) segment.
19 pages, 6152 KiB  
Article
Wind Profile Reconstruction Based on Convolutional Neural Network for Incoherent Doppler Wind LiDAR
by Jiawei Li, Chong Chen, Yuli Han, Tingdi Chen, Xianghui Xue, Hengjia Liu, Shuhua Zhang, Jing Yang and Dongsong Sun
Remote Sens. 2024, 16(8), 1473; https://doi.org/10.3390/rs16081473 - 22 Apr 2024
Viewed by 1652
Abstract
The rapid development of artificial intelligence (AI) and deep learning has revolutionized the field of data analysis in recent years, including the analysis of signal data acquired by remote sensors. Light Detection and Ranging (LiDAR) technology is widely used in atmospheric research for measuring various atmospheric parameters. Wind measurement using LiDAR data has traditionally relied on the spectral centroid (SC) algorithm. However, this approach has limitations, particularly in low signal-to-noise ratio (SNR) regions. To overcome these limitations, this study leverages customized deep-learning techniques to achieve accurate wind profile reconstruction. The study uses datasets obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) and the mobile Incoherent Doppler LiDAR (ICDL) system built by the University of Science and Technology of China. We present a simulation-based approach for generating wind profiles from the statistical data and the associated theoretical calculations. We then constructed a convolutional neural network (CNN) model based on the U-Net architecture to replace the SC algorithm in LiDAR data post-processing. The CNN-generated results are evaluated and compared with the SC results and the ERA5 data. This study highlights the potential of deep learning-based techniques in atmospheric research and their ability to provide more accurate and reliable results.
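For readers unfamiliar with the baseline that the CNN replaces, the following is a minimal, generic spectral-centroid wind estimator; the ICDL system's actual processing chain and constants are not given here, so every input is an assumption.

```python
import numpy as np

def spectral_centroid_wind(freqs, power, f_ref, wavelength):
    """Generic SC estimate of line-of-sight wind from a Doppler spectrum.

    freqs: frequency bins of the backscatter spectrum (Hz)
    power: spectral power per bin
    f_ref: reference (zero-wind) frequency (Hz)
    wavelength: laser wavelength (m)
    """
    centroid = np.sum(freqs * power) / np.sum(power)  # power-weighted mean
    doppler_shift = centroid - f_ref
    return 0.5 * wavelength * doppler_shift  # radial wind speed (m/s)
```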
Show Figures

Graphical abstract

Figure 1: Mobile ICDL system developed by USTC.
Figure 2: LiDAR-measured wind profile vs. background wind profile (east–west orientation) on 20 October 2019 at 10 p.m.
Figure 3: (a) Altitude vs. amplitude of the wind perturbation; (b) altitude vs. wind speed error of the LiDAR detection on a logarithmic scale.
Figure 4: Three-channel input of the CNN (simulated LiDAR signal). Top: raw data scatter plot (spectral profile) of the simulated LiDAR signal. Middle: SC wind profile calculated from the simulated LiDAR signal. Bottom: Spline Transformer wind profile calculated from the simulated LiDAR signal.
Figure 5: Proposed customized U-Net structure.
Figure 6: LiDAR data spectral profile of the real LiDAR signal as the input of the CNN.
Figure 7: Top: output of the CNN. Bottom: the ground truth label.
Figure 8: Wind profile plots of average performance on the validation set: (a) Argmax plot of the CNN against ground truth; (b) SC plot of the CNN against ground truth; (c) SC plot of raw data against ground truth; (d) Gaussian-smoothed SC plot of raw data against ground truth.
Figure 9: Regression analysis of average performance: (a) regression analysis of GSSC against ground truth; (b) regression analysis of SCCNN against ground truth.
Figure 10: East–west wind profile measurement of the GSSC vs. the SCCNN on 31 October 2019 at Kolar, Xinjiang (positive wind speed corresponds to east wind). The horizontal axis shows the timeline in HH:MM format, and the color indicates wind speed and direction. The white blank regions indicate where the LiDAR signal is below the SNR threshold required for SC calculation. Top: wind profile measurement of the GSSC. Bottom: wind profile measurement of the SCCNN.
Figure 11: Wind profile plots in the east–west orientation at 2:00 a.m. on 1 November 2019 at Kolar, Xinjiang: (a) wind profile plots of SCCNN vs. GSSC vs. ERA5; (b) background wind profile plots of SCCNN vs. GSSC vs. ERA5.
21 pages, 2170 KiB  
Article
ITS Efficiency Analysis for Multi-Target Tracking in a Clutter Environment
by Zvonko Radosavljević, Dejan Ivković and Branko Kovačević
Remote Sens. 2024, 16(8), 1471; https://doi.org/10.3390/rs16081471 - 22 Apr 2024
Cited by 1 | Viewed by 1182
Abstract
Integrated Track Splitting (ITS) is a multi-scan algorithm for target tracking in a cluttered environment. The ITS filter models each track as a set of mutually exclusive components, usually in the form of a Gaussian mixture. The purpose of this research is to determine the limits of the 'endurance' of target tracking with the known ITS algorithm by analyzing the impact of the target detection probability. The state estimate and the a posteriori probability of component existence are computed recursively from the target existence probability, which may be used as a track quality measure for false track discrimination (FTD). The target existence probability is also used for track maintenance and track output. This article investigates the limits of the effectiveness of ITS multi-target tracking, first by theoretically deriving the dependence of the measurement likelihood ratio on reliable detection, and then by experimental testing. Numerical simulations of the proposed model were performed for various target detection probabilities in dense clutter environments. Additionally, the effectiveness of the algorithm combined with filters for various types of maneuvers, using the Interacting Multiple Model ITS (IMMITS) algorithm, was comparatively analyzed. Extensive numerical simulations (with both straight-moving and maneuvering targets) show the tracking limits achievable at different target detection probabilities and clutter densities. The simulations confirmed the derived theoretical limits of the tracking efficiency of the ITS algorithm at detection probabilities as low as 0.6, and of the IMMITS algorithm as low as 0.4, in the case of target maneuvers and dense clutter environments.
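As a hedged illustration of the existence-probability recursion the abstract refers to, here is the textbook IPDA/ITS-style update for a single track; the function name and inputs are invented for the sketch.

```python
def update_existence(p_pred, pd, pg, clutter_density, likelihoods):
    """One Bayes update of the target existence probability (sketch).

    p_pred: predicted probability of target existence
    pd, pg: detection and gating probabilities
    clutter_density: expected clutter measurement density in the gate
    likelihoods: measurement likelihoods of the gated detections
    """
    # Measurement likelihood ratio: a no-detection term plus one term per
    # gated measurement, each weighted against the clutter density.
    lam = (1.0 - pd * pg) + pd * pg * sum(l / clutter_density for l in likelihoods)
    return lam * p_pred / (1.0 - (1.0 - lam) * p_pred)

print(update_existence(0.5, pd=0.9, pg=0.97, clutter_density=1e-4,
                       likelihoods=[2e-4]))
```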
Show Figures

Figure 1: Diagram of target existence probability vs. probability of target detection.
Figure 2: Single-target scenario, ρ = 5·10⁻⁵.
Figure 3: Single-target scenario, ρ = 2·10⁻⁴.
Figure 4: CTT diagrams for the single-target scenario, ρ = 5·10⁻⁵.
Figure 5: CTT diagrams for the single-target scenario, ρ = 2·10⁻⁴.
Figure 6: RMSE of position for the single-target scenario, ρ = 5·10⁻⁵.
Figure 7: RMSE of position for the single-target scenario, ρ = 2·10⁻⁴.
Figure 8: Multi-target scenario, ρ = 5·10⁻⁵.
Figure 9: Multi-target scenario, ρ = 2·10⁻⁴.
Figure 10: CTT diagrams for the multi-target scenario (ITS), ρ = 5·10⁻⁵.
Figure 11: CTT diagrams for the multi-target scenario (ITS), ρ = 2·10⁻⁴.
Figure 12: CTT diagrams for the multi-target scenario (IMMITS), ρ = 5·10⁻⁵.
Figure 13: CTT diagrams for the multi-target scenario (IMMITS), ρ = 2·10⁻⁴.
Figure 14: RMSE of position diagrams for the multi-target scenario (ITS), ρ = 5·10⁻⁵.
Figure 15: RMSE of position diagrams for the multi-target scenario (ITS), ρ = 2·10⁻⁴.
Figure 16: RMSE of position diagrams for the multi-target scenario (IMMITS), ρ = 5·10⁻⁵.
Figure 17: RMSE of position diagrams for the multi-target scenario (IMMITS), ρ = 2·10⁻⁴.
19 pages, 6517 KiB  
Article
Concept of Spaceborne Ocean Microwave Dual-Function Integrated Sensor for Wind and Wave Measurement
by Hang Li, Wenkang Liu, Guangcai Sun, Changhong Chen, Mengdao Xing, Zhenhua Zhang and Jie Zhang
Remote Sens. 2024, 16(8), 1472; https://doi.org/10.3390/rs16081472 - 21 Apr 2024
Cited by 1 | Viewed by 1143
Abstract
Dedicated to synchronously acquiring large-area, high-precision, and multi-scale ocean wind and wave information, a novel concept of a spaceborne ocean microwave dual-function integrated sensor is proposed in this paper. It integrates the functions of a scatterometer and a SAR by sharing a single phased-array antenna. An overview of the scientific requirements and motivations for the sensor is given first. To fulfill the observation requirements of both functions, the constraints on system parameters such as frequency, antenna size, and incidence angle are analyzed, and the selection principles for these parameters are discussed within the limitations of antenna area, bandwidth, available time, and cost. Additionally, the constraints on the time sequence of transmitted and received pulses are derived to ensure that the two functions do not conflict when operating simultaneously. Subsequently, a method for jointly designing the pulse repetition frequency (PRF) of both functions is introduced, along with zebra maps to verify its effectiveness. Finally, the system and performance parameters of the sensor are given for further insight.
(This article belongs to the Special Issue Remote Sensing Applications in Ocean Observation (Second Edition))
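To make the zebra-map idea concrete, the sketch below checks whether a candidate PRF keeps a swath's echo window clear of transmit events; the geometry, pulse width, and guard time are illustrative placeholders, and the paper's nadir-echo and dual-function interleaving constraints are omitted.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def prf_is_valid(prf, r_near, r_far, tau, guard=2e-6):
    """One cell of a simplified zebra-map check.

    A PRF is usable for slant-range swath [r_near, r_far] if the whole
    echo window falls strictly between two transmit events (pulse width
    tau plus a guard interval on each side).
    """
    t_open, t_close = 2 * r_near / C, 2 * r_far / C
    pri = 1.0 / prf
    m = np.floor(t_open / pri)          # index of the preceding transmit pulse
    return (t_open - m * pri) >= (tau + guard) and \
           (t_close - m * pri) <= (pri - guard)

# Sweep PRFs for a hypothetical 20 km slant-range swath near 700 km range.
prfs = np.arange(1000.0, 4000.0, 10.0)
usable = [prf for prf in prfs if prf_is_valid(prf, 690e3, 710e3, tau=20e-6)]
```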
Show Figures

Figure 1: Operation principle of the scatterometer. Orange and green ellipses respectively represent the footprints of the inner and outer beams.
Figure 2: Different operation principles of the proposed concept and the traditional concept.
Figure 3: Observation modes of the spaceborne ocean microwave dual-function integrated sensor. The orange and green spirals respectively represent the scanning footprints of the scatterometer's inner and outer beams. The yellow boxes represent the coverage of the SAR function.
Figure 4: Potential of the proposed sensor for ocean wind and wave observation. The dashed arrows represent operations related to a single function; the solid arrows represent dual functions.
Figure 5: Satellite–ground geometrical model.
Figure 6: Change in backscattering coefficient with different wind speeds and incidence angles.
Figure 7: Diagram of the time sequence.
Figure 8: Interference in the receiving window. (a) Condition 1. (b) Condition 2.
Figure 9: Time sequence of the nadir echo. (a) Condition 1. (b) Condition 2.
Figure 10: Flow chart of PRF design.
Figure 11: Constraint curves of antenna size in range. Points A and B represent two conflicts between the antenna size requirements of the two functions at 600–900 km orbital altitudes. Points C and D are the minimum and maximum antenna sizes for the SAR function at an orbital altitude of 900 km. The peak transmit power for the scatterometer function and the SAR function is 60 W and 5000 W, respectively. The SNR is 5 dB. The smaller the incidence angle, the smaller the maximum range antenna size that meets the swath requirement of the SAR function; therefore, the simulated incidence angle is set to the minimum value of 20° to ensure that the swath of all beam positions in the strip-mapping imaging function is larger than 20 km.
Figure 12: Incidence angle range of the scatterometer function. Point E represents the conflict between the incidence angle requirements of the two functions.
Figure 13: Zebra map of single-function independent operation. The blacked-out areas indicate PRFs that do not satisfy all constraints; the white areas indicate PRFs that do. The yellow and blue vertical lines represent the minimum PRFs of the scatterometer and SAR functions, respectively. (The pulse width is 20 μs, the antenna size in the azimuth direction is 10 m, and the antenna size in the range direction is 0.82 m.)
Figure 14: Results of Global Mode 2. (a) Zebra map of the SAR function. (b) Zebra map of the scatterometer function. The red and green short vertical lines represent the beam positions of the SAR and scatterometer functions, respectively. The yellow and blue vertical lines represent the minimum PRFs of the scatterometer and SAR functions, respectively. The blue horizontal lines show the incidence angle from top to bottom. The rose boxes highlight the selected PRFs. The yellow box highlights the selected area for easy comparison with subsequent simulations.
Figure 15: Results of the Local Mode. (a) Zebra map of the SAR function. (b) Zebra map of the scatterometer function. The line and box conventions are as in Figure 14.
Figure 16: Results of the Regional Mode. (a) Zebra map of the SAR function. (b) Zebra map of the scatterometer function. The brown areas of different shades indicate that the time-sequence constraints differ under different beam positions. Multiple short vertical lines represent the beam positions of the SAR and scatterometer functions; other conventions are as in Figure 14.
Figure 17: Results of the Global Mode. (a) Zebra map of the SAR function. (b) Zebra map of the scatterometer function. Conventions are as in Figure 16.
22 pages, 8344 KiB  
Article
Impact Analysis and Compensation Methods of Frequency Synchronization Errors in Distributed Geosynchronous Synthetic Aperture Radar
by Xiaoying Sun, Leping Chen, Zhengquan Zhou, Huagui Du and Xiaotao Huang
Remote Sens. 2024, 16(8), 1470; https://doi.org/10.3390/rs16081470 - 21 Apr 2024
Cited by 1 | Viewed by 1163
Abstract
Frequency synchronization error, one of the inevitable technical challenges in distributed synthetic aperture radar (SAR), affects different SAR systems differently. Multi-monostatic SAR is a typical distributed configuration in which frequency synchronization errors are tiny for distributed airborne and low Earth orbit (LEO) SAR systems. However, due to the long time delay and long synthetic aperture time, the imaging performance of a multi-monostatic geosynchronous (GEO) SAR system is affected by frequency oscillator errors. In this paper, to investigate the frequency synchronization problem in this configuration, we first model the echo signals with frequency synchronization errors, which can be divided into fixed frequency errors and random phase noise. Second, we discuss the impacts of the two kinds of errors on imaging performance. Third, to solve the problem, we propose an autofocus back-projection (ABP) algorithm, which adopts the coordinate descent method and iteratively adjusts the phase error estimate until the image reaches its maximum sharpness. Based on the characteristics of the frequency synchronization errors, we further propose the Node ABP (NABP) algorithm, which greatly reduces storage and computation compared to the ABP algorithm. Finally, simulations are carried out to validate the effectiveness of the ABP and NABP algorithms.
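A minimal sketch of coordinate-descent sharpness autofocus in the spirit of ABP follows; the grid search over candidate phases stands in for the paper's update rule, and the array shapes and parameters are assumptions.

```python
import numpy as np

def abp_autofocus(pulse_images, n_iter=3, n_cand=64):
    """Coordinate-descent phase autofocus (sketch, not the exact ABP update).

    pulse_images: complex per-pulse back-projection images, shape (P, H, W);
    the focused image is the phase-corrected coherent sum over pulses.
    Each pass sweeps the pulses and keeps, for each pulse, the phase
    correction that maximizes image sharpness (sum of |I|^4).
    """
    n_pulses = pulse_images.shape[0]
    phases = np.zeros(n_pulses)
    candidates = np.linspace(-np.pi, np.pi, n_cand, endpoint=False)

    def sharpness(ph):
        image = np.tensordot(np.exp(1j * ph), pulse_images, axes=1)
        return np.sum(np.abs(image) ** 4)

    for _ in range(n_iter):
        for p in range(n_pulses):
            scores = []
            for c in candidates:
                phases[p] = c
                scores.append(sharpness(phases))
            phases[p] = candidates[int(np.argmax(scores))]  # best phase for pulse p
    return phases
```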
Show Figures

Figure 1: Typical power spectral density function of the spaceborne SAR system.
Figure 2: Distributed configuration of the GEO SAR system.
Figure 3: Results of bi-monostatic GEO SAR with various frequency errors: (a) BP image with Δf = π/8. (b) Azimuth profile of (a). (c) BP image with Δf = π/4. (d) Azimuth profile of (c). (e) BP image with Δf = π/2. (f) Azimuth profile of (e). (g) BP image with Δf = π. (h) Azimuth profile of (g).
Figure 4: BP images of multi-monostatic GEO SARs with various frequency errors: (a) Δfn = ±0.26 Hz; (b) Δfn = ±0.5 Hz; (c) Δfn = ±1 Hz; (d) |Δfn| ≤ 0.5 Hz; (e) |Δfn| ≤ 1 Hz; and (f) |Δfn| ≤ 2 Hz.
Figure 5: Azimuth profiles of multi-monostatic GEO SARs with various frequency errors. (a) Given frequency errors. (b) Random frequency errors.
Figure 6: Power spectral density functions of random phase noise for GEO SAR and LEO SAR.
Figure 7: Influence of phase noise with synthetic aperture time. (a) Standard deviation of QPE vs. synthetic aperture time. (b) ISLR caused by high-frequency phase noise vs. synthetic aperture time.
Figure 8: Influence of phase noise in monostatic and multi-monostatic GEO SAR systems. (a) Phase noise in monostatic and multi-monostatic GEO SAR systems. (b) Azimuth profiles with the two kinds of phase noise.
Figure 9: Flow chart of the ABP algorithm.
Figure 10: Results of ABP. (a) Estimations of azimuth samples. (b) Estimations of azimuth node samples. (c) Azimuth profiles after ABP processing.
Figure 11: Results of ABP. (a) Result after ABP processing. (b) Result after node error compensation.
Figure 12: Sharpness and estimation MSE after ABP. (a) Sharpness after ABP; (b) estimation MSE after ABP.
Figure 13: Schematic diagram of the NABP algorithm.
Figure 14: Flow chart of the NABP algorithm.
Figure 15: Variation in sharpness values with the number of iterations.
Figure 16: Azimuth profiles in different cases.
17 pages, 1665 KiB  
Article
Impacts of the Sudden Stratospheric Warming on Equatorial Plasma Bubbles: Suppression of EPBs and Quasi-6-Day Oscillations
by Ercha Aa, Nicholas M. Pedatella and Guiping Liu
Remote Sens. 2024, 16(8), 1469; https://doi.org/10.3390/rs16081469 - 21 Apr 2024
Cited by 1 | Viewed by 1328
Abstract
This study investigates the day-to-day variability of equatorial plasma bubbles (EPBs) over the Atlantic–American region and their connections to atmospheric planetary waves during the sudden stratospheric warming (SSW) event of 2021. The investigation is based on GOLD (Global Observations of the Limb and Disk) observations, the ICON (Ionospheric Connection Explorer) neutral wind dataset, ionosonde measurements, and simulations from WACCM-X (the Whole Atmosphere Community Climate Model with thermosphere–ionosphere eXtension). We found that the intensity of EPBs was notably reduced, by 35%, during the SSW compared with the non-SSW period. Furthermore, GOLD observations and ionosonde data show a significant quasi-6-day oscillation (Q6DO) in both the intensity of EPBs and the localized growth rate of the Rayleigh–Taylor (R-T) instability during the 2021 SSW event. Analysis of WACCM-X simulations and ICON neutral winds reveals that the Q6DO pattern coincided with an amplification of the quasi-6-day wave (Q6DW) in the WACCM-X simulations and a noticeable ∼6-day periodicity in ICON zonal winds. The combination of these multi-instrument observations and numerical simulations demonstrates that planetary waves such as the Q6DW can significantly influence the day-to-day variability of EPBs, especially during SSW periods, by modulating the strength of the prereversal enhancement and the growth rate of the R-T instability via the wind-driven dynamo. These findings provide novel insights into the connection between atmospheric planetary waves and ionospheric EPBs.
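As a simple illustration of flagging a ∼6-day periodicity in an index time series (the paper uses wavelet analysis; a Lomb–Scargle periodogram on synthetic data is shown here as a lighter-weight stand-in):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic daily "bubble index" with a 6-day oscillation plus noise.
days = np.arange(60.0)
rng = np.random.default_rng(0)
index = np.sin(2 * np.pi * days / 6.0) + 0.3 * rng.standard_normal(days.size)

# Periodogram over 2-16-day periods; a peak near 6 days flags a Q6DO.
periods = np.linspace(2.0, 16.0, 400)
power = lombscargle(days, index - index.mean(), 2 * np.pi / periods,
                    normalize=True)
print(f"peak period: {periods[np.argmax(power)]:.1f} days")
```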
Show Figures

Figure 1: Temporal variations in (a) stratospheric polar temperature at 60–90°N, 10 hPa (red) and zonal-mean zonal wind at 60°N, 10 hPa (blue); (b) the amplitude of planetary waves with zonal wavenumber 1 (solid) and 2 (dashed) at 60°N, 10 hPa; (c) F10.7 cm solar radio flux; (d) Kp index; and (e) the amplitude of the westward wavenumber-1 Q6DW in temperature from the WACCM-X simulation (LA + S/G), with respect to cumulative days from 1 January 2021.
Figure 2: (a,b) Disk images of GOLD nighttime measurements of OI 135.6 nm emission at 23:40 UT on 28 December 2020 (before SSW) and 8 January 2021 (during SSW) in geomagnetic coordinates. The red lines denote the chosen equatorial region between ±8° MLAT. (c,d) Corresponding longitudinal fluctuation of the normalized differential emission radiance (ΔR) after removing the MLAT-integrated background reference. (e,f) Temporal variation in the GOLD Bubble Index and the associated wavelet power spectrum since 1 January 2021. Circles mark the 95% significance level. The black line in panel (f) represents the temporal variation in the averaged power spectrum at periodicities of 5.5–8 days (i.e., the Q6DO intensity).
Figure 3: Ionosonde observations of (a) Ne profiles and hmF2 (F2-region peak height) and (b) F-region vertical plasma drift at Sao Luis during 4–10 January 2021. (c) Temporal fluctuation of the derived linear growth rate of the R-T instability. (d,e) Day-to-day variation in postsunset maximum values of the R-T instability growth rate and the corresponding wavelet analysis, with respect to cumulative days since 1 January 2021.
Figure 4: Periodograms of the ICON/MIGHTI zonal wind measurements at 100 km during 15–30 January 2021.
Figure 5: (a) WACCM-X simulations (LA only) of the normalized growth rate of the R-T instability (blue) and normalized PRE (green), with respect to cumulative days from 1 January 2021. (b) Cross-wavelet analysis of the WACCM-X normalized R-T instability and PRE time series. The relative phase is represented by black arrows, pointing right for in-phase and left for antiphase.
17 pages, 13321 KiB  
Article
AIDER: Aircraft Icing Potential Area DEtection in Real-Time Using 3-Dimensional Radar and Atmospheric Variables
by Yura Kim, Bo-Young Ye and Mi-Kyung Suk
Remote Sens. 2024, 16(8), 1468; https://doi.org/10.3390/rs16081468 - 21 Apr 2024
Viewed by 1675
Abstract
Aircraft icing refers to the accumulation of ice on the surface and components of an aircraft when supercooled water droplets collide with the aircraft above the freezing level (at altitudes at which the temperature is below 0 °C); it requires vigilant monitoring to avert aviation accidents attributable to icing. In response to this imperative, the Weather Radar Center (WRC) of the Korea Meteorological Administration (KMA) has developed a real-time icing detection algorithm. We utilized 3D dual-polarimetric radar variables, 3D atmospheric variables, and aircraft icing data from 2018–2022, and statistically analyzed the radar and atmospheric variables within the icing areas determined from the aircraft data. An algorithm capable of detecting icing potential areas (icing potential) was formulated from these characteristics, classifying the icing potential into three stages: precipitation, icing caution, and icing warning. The algorithm was validated, demonstrating notable performance with a probability of detection of 0.88. It was applied to three icing cases under different environmental conditions (frontal, stratiform, and cumuliform clouds), thereby offering real-time observable icing potential across the entire Korean Peninsula.
(This article belongs to the Special Issue Synergetic Remote Sensing of Clouds and Precipitation II)
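The published thresholds are not reproduced in this listing, so the sketch below only mirrors the three-stage structure of the classification (precipitation, icing caution, icing warning); every threshold is an invented placeholder, not a value from the paper.

```python
def icing_potential(zh_dbz, zdr_db, temp_c, rh_pct):
    """Three-stage icing-potential classifier (illustrative placeholders only)."""
    in_cloud = zh_dbz > -10.0 and rh_pct > 80.0   # echo plus high humidity
    supercooled = -20.0 <= temp_c <= 0.0          # supercooled-water regime
    if not in_cloud:
        return "none"
    if supercooled and zh_dbz > 20.0 and zdr_db > 1.0:
        return "icing warning"
    if supercooled:
        return "icing caution"
    return "precipitation"

print(icing_potential(zh_dbz=25.0, zdr_db=1.5, temp_c=-8.0, rh_pct=92.0))
```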
Show Figures

Figure 1: Composite map of radar observations. Yellow circles indicate the observation areas of the Korea Meteorological Administration (KMA) radars, and green circles indicate the observation areas of the Ministry of Environment (ME) radars. Yellow and green squares indicate the locations of the 10 KMA radars and 5 ME radars, respectively.
Figure 2: Photograph of the atmospheric research aircraft (NARA). The red dotted circle indicates the icing detector, which is enlarged in the red square.
Figure 3: Conceptual diagram of time and location matching between the 3D gridded data and the aircraft data. The line from blue to yellow indicates the aircraft route (from departure to arrival); the red line indicates a route segment where icing occurred; black dots represent points of icing occurrence; cuboids represent the 3D gridded data (radar and atmospheric variables).
Figure 4: Histograms and probability density functions (black lines) of (a) ZH, (b) ZDR, (c) T, and (d) RH.
Figure 5: Algorithm for detection of icing potential.
Figure 6: Verification results for a total of ten cases. (a) Distribution of hit and miss locations; (b) height of hits and misses according to longitude. Blue squares: hits; red crosses: misses.
Figure 7: Images of an icing case associated with low pressure and fronts at 1325 KST on 13 March 2022: (a) ZH, (b) ZDR, (c) HC, and (f) LWC at 3 km height; vertical cross-sections (pink line A–B) of (d) ZH and (e) HC; (g) icing potential (icing caution: light purple; icing warning: purple); (h) maximum height of icing potential; (i) vertical cross-section of icing potential. Red box: location of the flight route where icing occurred.
Figure 8: Images of an icing case involving stratiform clouds at 1200 KST on 21 November 2018: (a) ZH, (b) ZDR, (c) HC, and (f) LWC at 3 km height; vertical cross-sections (pink line A–B) of (d) ZH and (e) HC; (g) icing potential; (h) maximum height of icing potential; (i) vertical cross-section of icing potential. Red box: location of the flight route where icing occurred.
Figure 9: Images of an icing case involving cumuliform clouds at 1210 KST on 5 October 2018: (a) ZH, (b) ZDR, (c) HC, and (f) LWC at 4.5 km height; vertical cross-sections (pink line A–B) of (d) ZH and (e) HC; (g) icing potential; (h) maximum height of icing potential; (i) vertical cross-section of icing potential. Red box: location of the flight route where icing occurred.
17 pages, 5075 KiB  
Article
CNN-BiLSTM: A Novel Deep Learning Model for Near-Real-Time Daily Wildfire Spread Prediction
by Mohammad Marjani, Masoud Mahdianpari and Fariba Mohammadimanesh
Remote Sens. 2024, 16(8), 1467; https://doi.org/10.3390/rs16081467 - 20 Apr 2024
Cited by 19 | Viewed by 4622
Abstract
Wildfires significantly threaten ecosystems and human lives, necessitating effective prediction models for the management of this destructive phenomenon. This study integrates Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) modules to develop a novel deep learning model, CNN-BiLSTM, for near-real-time wildfire spread prediction that captures both spatial and temporal patterns. The study uses the Visible Infrared Imaging Radiometer Suite (VIIRS) active fire product and a wide range of environmental variables, including topography, land cover, temperature, NDVI, wind information, precipitation, soil moisture, and runoff, to train the CNN-BiLSTM model. A comprehensive exploration of parameter configurations and settings was conducted to optimize the model's performance. The evaluation results and their comparison with benchmark models, such as LSTM and CNN-LSTM models, demonstrate the effectiveness of the CNN-BiLSTM model, with IoU and F1 scores of 0.58 and 0.73 for the validation and training sets, respectively. This innovative approach offers a promising avenue for enhancing wildfire management through its capacity for near-real-time prediction, marking a significant step forward in mitigating the impact of wildfires.
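For reference, the IoU and F1 metrics quoted above reduce to simple identities over pixel counts (the counts in the example are made up); note that F1 (Dice) = 2·IoU/(1 + IoU).

```python
def segmentation_scores(tp, fp, fn):
    """IoU and F1 for a binary fire/no-fire mask, from pixel counts."""
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, f1

print(segmentation_scores(tp=580, fp=210, fn=205))  # hypothetical counts
```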
Show Figures

Figure 1: Geographical location of the study area, highlighting the distribution of active fires obtained from the VIIRS dataset. Daily temperature information was extracted from ERA-5, and precipitation data were sourced from the Australian Landscape Water Balance.
Figure 2: Patch extraction process and data structure preparation: section (A) shows the initial data structure before patch extraction, with daily temporal resolution; section (B) provides information about the data shapes; section (C) indicates the 50% overlap between two consecutive patches (N and N + 1) in the x direction.
Figure 3: Overall framework of TD-CNNs. The orange cubes represent the input tensors, and the purple cubes the features extracted by the CNNs. The variable T denotes the time dimension.
Figure 4: Architecture of the proposed CNN-BiLSTM model.
Figure 5: Schematic definition of TP, FN, FP, and TN.
Figure 6: Schematic definition of precision and recall.
Figure 7: Histograms of the IoU, recall, precision, and F1 of the CNN-BiLSTM model for the validation set (top) and training set (bottom).
Figure 8: CNN-BiLSTM predictions for 20 validation samples. A limitation of the model's predictions is highlighted with an orange arrow.
Figure 9: IoU score on the validation set for different threshold values. The vertical red line marks the optimal threshold, and the green cross marker indicates the IoU at that threshold.
Figure 10: Histograms of the IoU for different time steps.
Figure 11: Daily correlation between predictions and environmental variables.
25 pages, 5704 KiB  
Article
A Metadata-Enhanced Deep Learning Method for Sea Surface Height and Mesoscale Eddy Prediction
by Rongjie Zhu, Biao Song, Zhongfeng Qiu and Yuan Tian
Remote Sens. 2024, 16(8), 1466; https://doi.org/10.3390/rs16081466 - 20 Apr 2024
Cited by 2 | Viewed by 2006
Abstract
Predicting mesoscale eddies in the ocean is crucial for advancing our understanding of the ocean and climate systems. Establishing spatio-temporal correlation among input data is a significant challenge in mesoscale eddy prediction tasks, especially for deep learning techniques. In this paper, we first present a deep learning solution based on a video prediction model to capture the spatio-temporal correlation and accurately predict future sea surface height data. To enhance the model's performance, we introduce a novel metadata embedding module that uses neural networks to fuse remote sensing metadata with the input data, increasing accuracy. To the best of our knowledge, our model outperforms the state-of-the-art method for predicting sea level anomalies. A mesoscale eddy detection algorithm is then applied to the predicted sea surface height data to generate future mesoscale eddies. The proposed solution achieves competitive results: the prediction error for the eddy center position is 5.6 km for a 3-day prediction and 13.6 km for a 7-day prediction.
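The paper's exact distance formula is not given in this listing; a haversine great-circle distance, sketched below, is one standard way to score eddy-center position errors such as the 5.6 km and 13.6 km figures quoted above.

```python
import numpy as np

def center_error_km(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between predicted and observed eddy centers.

    Coordinates are in degrees; 6371 km is the mean Earth radius.
    """
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

print(f"{center_error_km(30.00, 150.00, 30.03, 150.04):.1f} km")
```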
Show Figures

Figure 1: The architecture of our eddy prediction method. In the SSH prediction component, a deep learning model consisting of an Encoder, Metadata Embedding Module, Predictor, and Decoder takes a sequence of sea surface height and velocity, along with its metadata, as the input to predict the sea surface height several days later, utilizing MSE loss for training. In the mesoscale eddy prediction component, the trained SSH prediction model is employed to predict sea surface height, which is then used for detecting mesoscale eddies, thus obtaining the prediction results for mesoscale eddies.
Figure 2: The architecture of our SSH prediction model. This convolution-based deep learning model consists of an Encoder, Metadata Embedding Module, Predictor, and Decoder, using sea surface height and velocity data as the input. Section 4.2.5 provides a detailed overview of the model's composition.
Figure 3: A simplified image representation of the Metadata Embedding Module.
Figure 4: The error distribution in the large-area ADT predictions. Each graph shows the absolute error (m) on the horizontal axis and the frequency on the vertical axis; the first and second images show the results for 3-day and 7-day predictions, respectively.
Figure 5: The distribution of RMSE across different regions. The points on the graph represent the central coordinates of a data point; the axes represent longitude and latitude in degrees, with western longitude and northern latitude specified as positive directions.
Figure 6: A sample of single-window area 3-day prediction results. The columns, from left to right, show the predicted image, the ground truth image, the error, and the scatter plot with the fitted line of predicted versus true value. The error is the difference between the predicted image and the ground truth image. The color bar and scatter plot measurements are in meters.
Figure 7: A sample of single-window area 7-day prediction results, laid out as in Figure 6.
Figure 8: The error distribution in the single-window area SLA predictions. Each graph shows the absolute error (m) on the horizontal axis and the frequency on the vertical axis. The first and second columns represent Region 2 and Region 3 in the test data, while the first and second rows show the results for 3-day and 7-day predictions, respectively.
Figure 9: Examples of eddy detection results with SLA maps. The white circles represent the shapes of the eddies, with the points inside the circles indicating the positions of the eddy centers; this shape and center information is calculated by the eddy detection algorithm.
Figure 10: Total detection count of cyclonic/anticyclonic eddies in Area 1 and Area 2.
Figure 11: Distributions of eddy center distances at different time steps in different regions.
15 pages, 40498 KiB  
Technical Note
Diapiric Structures in the Tinto River Estuary (SW Spain) Caused by Artificial Load of an Industrial Stockpile
by Juan A. Morales, Berta M. Carro, José Borrego, Antonio J. Diosdado, María Eugenia Aguilar and Miguel A. González
Remote Sens. 2024, 16(8), 1465; https://doi.org/10.3390/rs16081465 - 20 Apr 2024
Viewed by 2763
Abstract
The mouth of the Tinto River is located on the southwest coast of the Iberian Peninsula in the northwest of the Gulf of Cadiz. The river flows into an estuarine system shared with the Odiel River, commonly known as the “Ría de Huelva”. [...] Read more.
The mouth of the Tinto River is located on the southwest coast of the Iberian Peninsula in the northwest of the Gulf of Cadiz. The river flows into an estuarine system shared with the Odiel River, commonly known as the “Ría de Huelva”. In the 1960s, a wide area of ancient salt marshes was buried under a stockpile of phosphogypsum industrial waste, which reaches a height of 35 m above the level of the salt marsh at its highest point. Two surveys using high-resolution seismic reflection in conjunction with a parametric profiler were carried out in 2016 and 2018. The purpose of these geophysical studies was to build a 3D model of the sedimentary units constituting the most recent filling of the estuary. The records present abundant extrusion structures located on the margins of the waste stockpiles, which break the visible stratification of the surficial units of the estuary. In some sectors, these structures have reached the estuarine surface and therefore have a morphological expression on the estuarine floor. The origin of these structures is interpreted as a vertical escape of fluidized sediments from lower units, caused by overpressure from the stockpile load. Full article
(This article belongs to the Special Issue Advances in Remote Sensing in Coastal Geomorphology (Third Edition))
Show Figures

Figure 1: Location of the study area within the Tinto River Estuary (A) and the phosphogypsum stockpiles (B). The location of the profile in Figure 2B is indicated.
Figure 2: Geological setting of the stockpiles. (A) Synthetic sedimentary sequence under the stockpiles (Carro et al., 2018). (B) Geological profile showing the geometry of the sedimentary units and the record of accumulated subsidence, from geological information in previous studies [6,7].
Figure 3: Location of the longitudinal tracks of the seismic profiles (yellow lines) and vibracores (red circles).
Figure 4: Appearance of the two defined seismic facies.
Figure 5: Sedimentary sequences in the seven vibracores in the study. Level 0 corresponds to the equinoctial extreme low-water level.
Figure 6: Seismic profiles in which different fluidized-sediment injection structures are observed. (A) Plumes. (B) Diapirs without surficial expression. (C) Diapirs with surficial expression. (D) Dome.
Figure 7: Evolution of the injection structures in the study area between December 2016 and March 2018. (A) Plumes. (B) Diapirs. The red dashed line shows the position of the boundary of the injections in December 2016.
Figure 8: Conceptual model of the injection structures studied, at different stages of evolution. (A) Fluid plumes with surficial pockmarks. (B) Deep diapirs with no surficial expression. (C) Diapirs reaching the surface, with mud volcanoes. (D) Extensive dome with surficial bulges.
Figure 9: Block diagram linking a representation of the multibeam echosounder record cut by a seismic profile.
19 pages, 11541 KiB  
Article
Integrating Optical and SAR Time Series Images for Unsupervised Domain Adaptive Crop Mapping
by Luwei Feng, Dawei Gui, Shanshan Han, Tianqi Qiu and Yumiao Wang
Remote Sens. 2024, 16(8), 1464; https://doi.org/10.3390/rs16081464 - 20 Apr 2024
Cited by 2 | Viewed by 1862
Abstract
Accurate crop mapping is crucial for ensuring food security. Recently, many studies have developed diverse crop mapping models based on deep learning. However, these models generally rely on a large amount of labeled crop samples to investigate the intricate relationship between the crop [...] Read more.
Accurate crop mapping is crucial for ensuring food security. Recently, many studies have developed diverse crop mapping models based on deep learning. However, these models generally rely on a large amount of labeled crop samples to investigate the intricate relationship between the crop types of the samples and the corresponding remote sensing features. Moreover, their efficacy is often compromised when applied to other areas, owing to disparities between source and target data. To address this issue, a new multi-modal deep adaptation crop classification network (MDACCN) was proposed in this study. Specifically, MDACCN synergistically exploits time series optical and SAR images using a middle fusion strategy to achieve good classification capacity. Additionally, local maximum mean discrepancy (LMMD) is embedded into the model to measure and decrease domain discrepancies between the source and target domains. As a result, a model well trained in a source domain can still maintain satisfactory accuracy when applied to a target domain. In the training process, MDACCN incorporates labeled samples from the source domain and unlabeled samples from the target domain; for inference, only unlabeled samples from the target domain are required. To assess the validity of the proposed model, Arkansas State in the United States was chosen as the source domain, and Heilongjiang Province in China was selected as the target domain. Supervised deep learning and traditional machine learning models were chosen as comparison models. The results indicated that the MDACCN achieved strong performance in the target domain, surpassing other models with an overall accuracy, Kappa, and macro-averaged F1 score of 0.878, 0.810, and 0.746, respectively. In addition, the crop-type maps produced by the MDACCN exhibited greater consistency with the reference maps. Moreover, the integration of optical and SAR features substantially improved the model in the target domain compared with single-modal features. This study indicates the considerable potential of combining multi-modal remote sensing data with an unsupervised domain adaptive approach to provide reliable crop distribution information in areas where labeled samples are missing. Full article
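The LMMD loss mentioned above aligns source and target feature distributions per class rather than globally, weighting a kernel MMD by class membership: hard labels on the source side, soft predictions on the target side. A simplified single-kernel sketch under those assumptions (the fixed Gaussian bandwidth and the function names are illustrative, not the paper's exact formulation):

```python
import torch

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Pairwise Gaussian kernel between rows of a (n, d) and b (m, d).
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def lmmd(zs, zt, ys_onehot, yt_soft, sigma: float = 1.0) -> torch.Tensor:
    """Simplified local MMD: a per-class weighted MMD between source
    features zs (weighted by one-hot labels) and target features zt
    (weighted by softmax predictions used as pseudo-labels)."""
    loss = zs.new_zeros(())
    n_classes = ys_onehot.shape[1]
    for c in range(n_classes):
        ws = ys_onehot[:, c] / (ys_onehot[:, c].sum() + 1e-8)  # source weights
        wt = yt_soft[:, c] / (yt_soft[:, c].sum() + 1e-8)      # target weights
        loss = loss + (ws @ gaussian_kernel(zs, zs, sigma) @ ws
                       + wt @ gaussian_kernel(zt, zt, sigma) @ wt
                       - 2 * ws @ gaussian_kernel(zs, zt, sigma) @ wt)
    return loss / n_classes
```

In training, such a term would be added to the usual classification loss so that, class by class, the intermediate features of the two domains are pulled together.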
Show Figures

Figure 1: Study areas: (a) Arkansas State and (b) HLJ Province.
Figure 2: Crop planting and harvesting dates.
Figure 3: Architecture of MDACCN. x^s and x^t denote the source-domain instances s and target-domain instances t, and opt and SAR denote the optical and SAR features, respectively. LMMD needs four inputs: the true labels y^s, the predicted labels ŷ^t, and the activated intermediate features z^s and z^t.
Figure 4: Global domain adaptation and subdomain adaptation (blue and red represent the source domain and target domain, respectively; circles and squares represent different categories).
Figure 5: Architectures of the (a) early-fusion-based model and (b) decision-fusion-based model.
Figure 6: Confusion matrices of (a) RF, (b) SDNN, and (c) MDACCN with (1) S1 features, (2) S2 features, and (3) S1S2 features, using the testing data from Arkansas.
Figure 7: Confusion matrices of (a) RF, (b) SDNN, and (c) MDACCN with (1) S1 features, (2) S2 features, and (3) S1S2 features, using the testing data from HLJ.
Figure 8: Classification accuracy of the models for each crop with different fusion schemes, using the testing data from HLJ Province.
Figure 9: Crop maps from (a) reference maps, (b) RF, (c) SDNN, and (d) MDACCN for the mapping areas (1–3) in HLJ Province.
Figure 10: Distributions of (1) original SAR features and (2) extracted SAR features of (a) soybean, (b) corn, (c) rice, and (d) "others", using the testing data of the source and target domains.
Figure 11: Distributions of (1) original optical features and (2) extracted optical features of (a) soybean, (b) corn, (c) rice, and (d) "others", using the testing data of the source and target domains.
19 pages, 6258 KiB  
Article
Locating and Grading of Lidar-Observed Aircraft Wake Vortex Based on Convolutional Neural Networks
by Xinyu Zhang, Hongwei Zhang, Qichao Wang, Xiaoying Liu, Shouxin Liu, Rongchuan Zhang, Rongzhong Li and Songhua Wu
Remote Sens. 2024, 16(8), 1463; https://doi.org/10.3390/rs16081463 - 20 Apr 2024
Cited by 1 | Viewed by 1475
Abstract
Aircraft wake vortices are serious threats to aviation safety. The Pulsed Coherent Doppler Lidar (PCDL) has been widely used in the observation of aircraft wake vortices due to its advantages of high spatial-temporal resolution and high precision. However, the post-processing algorithms require significant [...] Read more.
Aircraft wake vortices are serious threats to aviation safety. The Pulsed Coherent Doppler Lidar (PCDL) has been widely used in the observation of aircraft wake vortices due to its advantages of high spatial-temporal resolution and high precision. However, the post-processing algorithms require significant computing resources and cannot achieve real-time detection of a wake vortex (WV). This paper presents an improved Convolutional Neural Network (CNN) method for WV locating and grading based on PCDL data that avoids the influence of unstable ambient wind fields on the localization and classification results. Typical WV cases are selected for analysis, and the WV locating and grading models are validated on different test sets. The consistency of the analytical algorithm and the CNN algorithm is verified. The results indicate that the improved CNN method achieves satisfactory recognition accuracy with higher efficiency and better robustness, especially in the case of strong turbulence, where the CNN method recognizes the wake vortex while the analytical method cannot. The improved CNN method is expected to be applied to optimize the current aircraft spacing criteria, which is promising for both aviation safety and economic benefit. Full article
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
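Figure 3 below illustrates the stacked RV-SW representation that feeds the CNNs. As a minimal NumPy sketch of how such a two-channel input, and the cropped WV region passed to the grading model, could be prepared (the normalization scheme and box convention are assumptions, not the paper's exact preprocessing):

```python
import numpy as np

def make_rv_sw_image(rv: np.ndarray, sw: np.ndarray) -> np.ndarray:
    """Stack min-max-normalized radial-velocity and spectrum-width
    scans into a two-channel RV-SW image for the locating CNN
    (hypothetical preprocessing)."""
    def norm(x: np.ndarray) -> np.ndarray:
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.stack([norm(rv), norm(sw)], axis=-1)  # shape (H, W, 2)

def crop_wv_region(img: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the wake vortex region (x0, y0, x1, y1 in pixels) returned
    by the locating model, as input to the grading model."""
    x0, y0, x1, y1 = box
    return img[y0:y1, x0:x1]
```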
Show Figures

Figure 1: Information on the wake vortex observation experiments based on PCDL. (a) Locations of ZUUU and ZSQD. (b) Sketch map of the wake vortex observation experiments at ZUUU. (c) Sketch map of the wake vortex observation experiments at ZSQD. (d) Lidar position and scanning mode at ZUUU. (e) Lidar position and scanning mode at ZSQD.
Figure 2: Flow chart of the main methodology of the aircraft wake vortex locating and grading algorithm.
Figure 3: Images of a wake vortex observed at ZSQD at 00:47 on 27 April 2020, LST, where brighter colors represent larger values. (a) Pseudo-color image of radial velocity (RV). (b) Pseudo-color image of spectrum width (SW). (c) Gray-scale image of RV. (d) Gray-scale image of SW. (e) Stacked image of radial velocity and spectrum width (RV-SW).
Figure 4: RV-SW images after marking the regions of the wake vortices with dashed boxes, where brighter colors represent larger values. (a,b) Marked wake vortex regions at ZSQD. (c,d) Marked wake vortex regions at ZUUU.
Figure 5: The structure of the WV locating model based on YOLO v4, with different color blocks representing different layers. The color blocks marked with * are the results of combining different layers, whose structures are explained in the lower part of the diagram.
Figure 6: The WV region data set after extracting the wake vortex feature from the RV-SW data set, where brighter colors represent larger values. (a) Example of Grade 0. (b) Example of Grade 1. (c) Example of Grade 2. (d) Example of Grade 3.
Figure 7: The structure of the deep learning model used for wake vortex grading based on WV region images, with different color blocks representing different layers. The color blocks marked with * are the results of combining different layers, whose structures are explained in the lower part of the diagram.
Figure 8: Images and results of a wake vortex under stable meteorological conditions. (a) Spectrum width image. (b) Radial velocity image. (c) Results of the WV locating and grading models applied to the wake vortex, where brighter colors represent larger values.
Figure 9: Images and results of a wake vortex under strong turbulence. (a) Spectrum width image. (b) Radial velocity image. (c) Horizontal location of the wake vortex using the analytical algorithm, where the light blue line represents SW_ind, the light pink line RV_ind, and the black line SR_ind; the red pentagram marks the vortex core identified by the algorithm. (d) Results of the WV locating and grading models applied to the wake vortex, where brighter colors represent larger values.
Figure 10: The confusion matrix of the WV grading model.
Figure 11: EDR^(1/3) values on typical dates at ZUUU and ZSQD.
24 pages, 9639 KiB  
Article
A Novel Semantic Content-Based Retrieval System for Hyperspectral Remote Sensing Imagery
by Fatih Ömrüuzun, Yasemin Yardımcı Çetin, Uğur Murat Leloğlu and Begüm Demir
Remote Sens. 2024, 16(8), 1462; https://doi.org/10.3390/rs16081462 - 20 Apr 2024
Cited by 1 | Viewed by 2209
Abstract
With the growing use of hyperspectral remote sensing payloads, there has been a significant increase in the number of hyperspectral remote sensing image archives, leading to a massive amount of collected data. This highlights the need for an efficient content-based hyperspectral image retrieval [...] Read more.
With the growing use of hyperspectral remote sensing payloads, there has been a significant increase in the number of hyperspectral remote sensing image archives, leading to a massive amount of collected data. This highlights the need for an efficient content-based hyperspectral image retrieval (CBHIR) system to manage and enable better use of hyperspectral remote-sensing image archives. Conventional CBHIR systems characterize each image by a set of endmembers and then perform image retrieval based on pairwise distance measures. Such an approach significantly increases the computational complexity of retrieval, mainly when the diversity of materials is high. Those systems also have difficulty retrieving images containing particular materials whose abundance is extremely low compared to other materials, which leads to image content being described with inappropriate and/or insufficient spectral features. In this article, a novel CBHIR system that defines global hyperspectral image representations based on a semantic approach to differentiate foreground and background image content for different retrieval scenarios is introduced to address these issues. The experiments conducted on a new benchmark archive of multi-label hyperspectral images, first introduced in this study, validate the retrieval accuracy and effectiveness of the proposed system. Comparative performance analysis with the state-of-the-art CBHIR systems demonstrates that modeling hyperspectral image content with foreground and background vocabularies has a positive effect on retrieval performance. Full article
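The foreground and background vocabularies sketched in this abstract follow a bag-of-words pattern: cluster pooled spectra into a small vocabulary per content type, describe each image by its normalized word histograms, and rank archive images by descriptor distance. A rough scikit-learn sketch under those assumptions (the vocabulary size, clustering method, and distance metric are illustrative choices, not necessarily the authors'):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(spectra: np.ndarray, n_words: int = 8) -> KMeans:
    # spectra: (n_pixels, n_bands), pooled from the archive's foreground
    # (or background) segments only; one vocabulary per content type.
    return KMeans(n_clusters=n_words, n_init=10).fit(spectra)

def describe(spectra: np.ndarray, vocab: KMeans) -> np.ndarray:
    # Normalized histogram of vocabulary-word occurrences in one image.
    words = vocab.predict(spectra)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

def retrieve(query_desc: np.ndarray, archive_descs: np.ndarray, k: int = 5):
    # Rank archive images by Euclidean distance between descriptors;
    # a full descriptor would concatenate the foreground and
    # background histograms.
    d = np.linalg.norm(archive_descs - query_desc, axis=1)
    return np.argsort(d)[:k]
```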
Show Figures

Figure 1: Pseudo-color representation of a remote sensing hyperspectral image X_1323 (a); illustration of its foreground (b) and background (c) image contents.
Figure 2: Block diagram of the proposed CBHIR system: green dashed lines represent offline processes that run in the background, while red dashed lines represent online processes.
Figure 3: Sample superpixel-based content segmentation with hyperSLIC. (a) False-color original image. (b) False-color segmented image.
Figure 4: Sample hyperspectral images with low and high spectral diversity.
Figure 5: Background content regions designated by the proposed CBHIR system for hyperspectral remote sensing payload products.
Figure 6: Foreground–background content segment classification. (a) False-color original image, (b) segmented image, (c) Mahalanobis score map, (d) foreground–background segment classification map.
Figure 7: Illustration of low-dimensional foreground and background content descriptors (where |V^f| = 8 and |V^b| = 8).
Figure 8: Illustration of low-dimensional overall content descriptors (where |V^f| = 8 and |V^b| = 8).
Figure 9: Footprint of the area imaged during the flight and used in benchmark archive generation.
Figure 10: Utilizing VHR imagery to identify hyperspectral image labels precisely: (a) a data product sliced to obtain hyperspectral images for the benchmark archive, (b) the corresponding VHR multispectral image section acquired during the same flight.
Figure 11: Fieldwork to enhance the accuracy of the content-labeling phase. The blue circle indicates the location where the ground-truth picture was taken.
Figure 12: Taxonomy of the content labels and the corresponding number of images labeled under each individual sub-category.
Figure 13: Content-based retrieval results of the proposed CBHIR system for X_q = X_125. The content labels of each image are given in Table 2.
Figure 14: Content-based retrieval results for X_q = X_1211; content labels are given in Table 3.
Figure 15: Content-based retrieval results for X_q = X_1914; content labels are given in Table 4.
Figure 16: Content-based retrieval results for X_q = X_2440; content labels are given in Table 5.
20 pages, 9422 KiB  
Article
Impact of Wildfires on Land Surface Cold Season Climate in the Northern High-Latitudes: A Study on Changes in Vegetation, Snow Dynamics, Albedo, and Radiative Forcing
by Melissa Linares and Wenge Ni-Meister
Remote Sens. 2024, 16(8), 1461; https://doi.org/10.3390/rs16081461 - 20 Apr 2024
Cited by 1 | Viewed by 1905
Abstract
Anthropogenic climate change is increasing the occurrence of wildfires, especially in northern high latitudes, leading to a shift in land surface climate. This study aims to determine the predominant climatic effects of fires in boreal forests to assess their impact on vegetation composition, [...] Read more.
Anthropogenic climate change is increasing the occurrence of wildfires, especially in northern high latitudes, leading to a shift in land surface climate. This study aims to determine the predominant climatic effects of fires in boreal forests to assess their impact on vegetation composition, surface albedo, and snow dynamics. The influence of fire-induced changes on Earth’s radiative forcing is investigated, while considering variations in burn severity and postfire vegetation structure. Six burn sites are explored in central Alaska’s boreal region, alongside six control sites, by utilizing Moderate Resolution Imaging Spectroradiometer (MODIS)-derived albedo, Leaf Area Index (LAI), snowmelt timing data, AmeriFlux radiation, National Land Cover Database (NLCD) land cover, and Monitoring Trends in Burn Severity (MTBS) data. Key findings reveal significant postfire shifts in land cover at each site, mainly from high- to low-stature vegetation. A continuous increase in postfire surface albedo and negative surface shortwave forcing was noted even after 12 years postfire, particularly during the spring and at high-severity burn areas. Results indicate that the cooling effect from increased albedo during the snow season may surpass the warming effects of earlier snowmelt. The overall climate impact of fires depends on burn severity and vegetation composition. Full article
(This article belongs to the Special Issue Remote Sensing of Solar Radiation Absorbed by Land Surfaces)
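The surface shortwave forcing referred to above is conventionally the change in absorbed shortwave radiation caused by the fire-induced albedo change, so a postfire albedo increase yields a negative (cooling) forcing. A minimal sketch of that bookkeeping, assuming co-located daily incoming-shortwave and albedo series for burn and control sites (array names are hypothetical):

```python
import numpy as np

def surface_shortwave_forcing(sw_down: np.ndarray,
                              albedo_burn: np.ndarray,
                              albedo_control: np.ndarray) -> np.ndarray:
    """Daily surface shortwave forcing (W m^-2) from a fire-induced
    albedo change: a postfire albedo increase reflects more sunlight
    and therefore gives a negative (cooling) forcing."""
    delta_albedo = albedo_burn - albedo_control
    return -sw_down * delta_albedo

# Example: sw_down = 180 W m^-2 and albedo rising from 0.15 to 0.35
# gives -180 * 0.20 = -36 W m^-2 (a cooling effect).
```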
Show Figures

Figure 1: The study sites in central Alaska, covering six burn sites (red) and their corresponding control sites (blue).
Figure 2: Flowchart of data processing and analysis.
Figure 3: Column one displays burn severity percent distributions for each site, indicating the extent of fire impact. Column two features the corresponding burn severity maps, visually representing the distribution of burn severity classes. Columns three and four contrast the NLCD 2001 prefire and NLCD 2016 postfire land cover maps, illustrating changes in vegetation over time. The fifth column provides the NLCD 2001 land cover maps for the control sites.
Figure 4: 2001 and 2016 NLCD land cover percentages for the burn sites. The stacked bars show the percentage of different land cover types at each burn site in 2001 and 2016, derived from the National Land Cover Database (NLCD). Land cover categories include evergreen forest, deciduous forest, shrub/scrub, emergent herbaceous wetlands, woody wetlands, grassland/herbaceous, open water, and developed/low-intensity areas. The total absolute change percentage and vegetation cover density classifications (dense vs. sparse) are provided for each burn site between the two time periods, visualizing the impact of fires on shifting land cover compositions and forest density at these locations over 15 years.
Figure 5: Summer (June, July, and August) and winter (January, February, and March) LAI values for all study sites and control sites over two decades. The red dashed line marks the wildfires.
Figure 6: The mean difference in snowmelt dates (black solid line) between each site and its control site from 2001 to 2018. The blue trendline depicts the prefire trajectory of changes in snowmelt timing, while the red trendline indicates the general postfire trajectory; the slope of each trendline is given in the legend.
Figure 7: Time series of albedo for the different burn severity classes across all burn sites from 2007 to 2021. The black dotted line marks the fire.
Figure 8: Monthly mean differences (black solid line) in albedo between fire and control sites. Prefire (dotted blue) and postfire (dotted red) trendlines indicate the general trajectory of changes in the albedo difference over the time series, with the slope of each trendline given in the legend.
Figure 9: Daily mean surface shortwave forcing (SSF) across the burn severity classes from 2010 to 2017.
21 pages, 10230 KiB  
Article
A Super-Resolution Reconstruction Model for Remote Sensing Image Based on Generative Adversarial Networks
by Wenyi Hu, Lei Ju, Yujia Du and Yuxia Li
Remote Sens. 2024, 16(8), 1460; https://doi.org/10.3390/rs16081460 - 20 Apr 2024
Cited by 8 | Viewed by 2307
Abstract
In current times, reconstruction of remote sensing images using super-resolution is a prominent topic of study. Remote sensing data have a complex spatial distribution. Compared with natural pictures, remote sensing pictures often contain subtler and more complicated information. Most super-resolution reconstruction algorithms cannot [...] Read more.
Super-resolution reconstruction of remote sensing images is currently a prominent topic of study. Remote sensing data have a complex spatial distribution; compared with natural images, remote sensing images often contain subtler and more complicated information. Most super-resolution reconstruction algorithms cannot restore all the information contained in remote sensing images: the content of some areas in the reconstructed images may be too smooth, and some areas may even show color shifts, resulting in lower-quality reconstructions. In response to these problems in current super-resolution reconstruction algorithms, this article proposes the SRGAN-MSAM-DRC model (an SRGAN model with a multi-scale attention mechanism and dense residual connections). This model is rooted in generative adversarial networks and incorporates multi-scale attention mechanisms and dense residual connections into the generator; furthermore, residual blocks are incorporated into the discriminator. We evaluate this model on remote sensing image datasets of real-world data, and the results indicate that the SRGAN-MSAM-DRC model improves all three evaluation metrics for super-resolution image reconstruction. Compared to the basic SRGAN model, the SSIM (structural similarity), PSNR (peak signal-to-noise ratio), and IE (image entropy) increase by 5.0%, 4.0%, and 4.1%, respectively. These results show that the quality of remote sensing images reconstructed by the SRGAN-MSAM-DRC model is better than that of the basic SRGAN model, and they verify that the model has good applicability and performance in super-resolution reconstruction of remote sensing images. Full article
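As a rough PyTorch-style sketch of what dense residual connections between generator blocks can look like (each block receiving the channel-wise concatenation of all earlier outputs through a 1x1 fusion convolution), with plain convolutional blocks standing in for the paper's PSA-based attention blocks and all sizes hypothetical:

```python
import torch
import torch.nn as nn

class DenseResidualStack(nn.Module):
    """Sketch: a stack of residual blocks in which each block sees the
    concatenation of all earlier outputs, fused back to a fixed channel
    count by a 1x1 convolution (simplified stand-in for PSA blocks)."""

    def __init__(self, channels: int = 64, n_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.PReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            ) for _ in range(n_blocks)
        )
        # 1x1 convolutions fusing the growing concatenation to `channels`.
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, 1)
            for i in range(n_blocks)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for block, fuse in zip(self.blocks, self.fuse):
            inp = fuse(torch.cat(features, dim=1))  # dense connection
            features.append(block(inp) + inp)       # residual connection
        return features[-1] + x                     # global skip
```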
Show Figures

Graphical abstract
Figure 1: The SRGAN generator's architecture.
Figure 2: The SRGAN discriminator's architecture.
Figure 3: The architecture of the residual blocks in the basic SRGAN model and of the PSA modules: (a) residual blocks in the basic SRGAN, (b) PSA module.
Figure 4: The dense residual connection architecture of the PSA modules in the SRGAN-MSAM-DRC generator.
Figure 5: The SRGAN-MSAM-DRC generator's architecture.
Figure 6: The SRGAN-MSAM-DRC discriminator's residual block architecture.
Figure 7: The SRGAN-MSAM-DRC discriminator's architecture.
Figure 8: UC Merced Land-Use Dataset.
Figure 9: The model training progress.
Figure 10: Comparison of tennis court reconstruction results.
Figure 11: Comparison of beach reconstruction results.
Figure 12: Comparison of baseball diamond reconstruction results.
Figures 13–15: PSNR, SSIM, and IE values for the different models.
Figure 16: Comparison of airplane reconstruction results.
Figure 17: Comparison of intersection reconstruction results.
Figure 18: Comparison of chaparral reconstruction results.
Figures 19–21: PSNR, SSIM, and IE values for the different models.
Figure 22: The time to reconstruct an image.
Figures 23–25: PSNR, SSIM, and IE values for the different models.
Figure 26: Comparison of Indian Pines reconstruction results.
Figure 27: Comparison of Pavia University reconstruction results.
24 pages, 5011 KiB  
Article
A Sparse SAR Imaging Method for Low-Oversampled Staggered Mode via Compound Regularization
by Mingqian Liu, Jie Pan, Jinbiao Zhu, Zhengchao Chen, Bingchen Zhang and Yirong Wu
Remote Sens. 2024, 16(8), 1459; https://doi.org/10.3390/rs16081459 - 20 Apr 2024
Cited by 5 | Viewed by 1220
Abstract
High-resolution wide-swath (HRWS) imaging is the research focus of the modern spaceborne synthetic-aperture radar (SAR) imaging field, with significant relevance and vast application potential. Staggered SAR, as an innovative imaging system, mitigates blind areas across the entire swath by periodically altering the radar [...] Read more.
High-resolution wide-swath (HRWS) imaging is the research focus of the modern spaceborne synthetic-aperture radar (SAR) imaging field, with significant relevance and vast application potential. Staggered SAR, as an innovative imaging system, mitigates blind areas across the entire swath by periodically altering the radar pulse repetition interval (PRI), thereby extending the swath width to multiples of that achievable by conventional systems. However, the staggered mode introduces inherent challenges, such as nonuniform azimuth sampling and echo data loss, leading to azimuth ambiguities and substantially impacting image quality. This paper proposes a sparse SAR imaging method for the low-oversampled staggered mode via compound regularization. The proposed method not only effectively suppresses azimuth ambiguities arising from nonuniform sampling without necessitating the restoration of missing echo data, but also incorporates total variation (TV) regularization into the sparse reconstruction model. This enhances the accurate reconstruction of distributed targets within the scene. The efficacy of the proposed method is substantiated through simulations and real data experiments from spaceborne missions. Full article
(This article belongs to the Special Issue Spaceborne High-Resolution SAR Imaging)
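In generic form, the compound-regularization reconstruction described above, using the L1/2&TV terminology that appears in the figures below, can be written as an optimization problem (the symbols here are generic placeholders, not necessarily the paper's notation):

```latex
\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \;
  \big\| \mathbf{y} - \Phi(\mathbf{X}) \big\|_2^2
  \; + \; \lambda_1 \, \| \mathbf{X} \|_{1/2}^{1/2}
  \; + \; \lambda_2 \, \mathrm{TV}(\mathbf{X})
```

Here y is the staggered-mode echo data, Φ the measurement operator, and X the scene reflectivity. The L1/2 quasi-norm promotes sparse, strong scatterers, the TV term favors piecewise-smooth distributed targets, and λ1 and λ2 balance the two priors against data fidelity.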
Show Figures

Figure 1: PRI sequence values and blind-area positions in the simulation experiments. (a) The fast-changing PRI sequence values corresponding to the parameters in Table 2, where the x-axis 'm' is the sequence number; (b) the blind-area positions for the fast-changing PRI sequence.
Figure 2: The 1-D imaging results with different methods. (a–c) Imaging results for the point target located inside the blind areas (slant range = 956 km); (d–f) for the point target at the boundary of the blind areas (slant range = 982 km); (g–i) for the point target outside the blind areas (slant range = 994 km).
Figure 3: The oversampling rate–ISLR relationship curves. (a) Using the MF method; (b) using the BLU interpolation method; (c) using the L1/2&TV-regularization-based method.
Figure 4: The imaging results for a one-dimensional distributed target with different methods, where the distributed target is located inside the blind areas (slant range = 956 km). (a) The ideal case; (b) the MF method; (c) the BLU interpolation method; (d) the L1/2&TV-regularization-based method.
Figure 5: Comparison curves of the NRMSE reconstructed by BLU interpolation and L1/2&TV regularization under different SNRs. (a) Inside the blind areas (slant range = 956 km); (b) at the boundary of the blind areas (slant range = 982 km).
Figure 6: The oversampling rate–NRMSE relationship curves. (a) Using the BLU interpolation method; (b) using the L1/2&TV-regularization-based method.
Figure 7: The imaging results of the different methods in the first actual scenario. (a) Original echo data processed under the constant-PRF condition; (b) direct imaging of the staggered-mode SAR data (MF); (c) the BLU interpolation method; (d) the L1/2&TV-regularization-based method.
Figure 8: The azimuth profiles of the strong point target indicated by the blue arrows in Figure 7.
Figure 9: The reconstruction results for the island areas marked by the green rectangular dashed frames in Figure 7. (a) MF; (b) L1/2&TV regularization.
Figure 10: The imaging results of the different methods in the second scenario, with panels as in Figure 7.
31 pages, 25541 KiB  
Article
Estimation of Small-Stream Water Surface Elevation Using UAV Photogrammetry and Deep Learning
by Radosław Szostak, Marcin Pietroń, Przemysław Wachniew, Mirosław Zimnoch and Paweł Ćwiąkała
Remote Sens. 2024, 16(8), 1458; https://doi.org/10.3390/rs16081458 - 20 Apr 2024
Viewed by 1638
Abstract
Unmanned aerial vehicle (UAV) photogrammetry allows the generation of orthophoto and digital surface model (DSM) rasters of terrain. However, DSMs of water bodies mapped using this technique often reveal distortions in the water surface, thereby impeding the accurate sampling of water surface elevation [...] Read more.
Unmanned aerial vehicle (UAV) photogrammetry allows the generation of orthophoto and digital surface model (DSM) rasters of terrain. However, DSMs of water bodies mapped using this technique often reveal distortions in the water surface, thereby impeding the accurate sampling of water surface elevation (WSE) from DSMs. This study investigates the capability of deep neural networks to accommodate the aforementioned perturbations and effectively estimate WSE from photogrammetric rasters. Convolutional neural networks (CNNs) were employed for this purpose. Two regression approaches utilizing CNNs were explored: direct regression employing an encoder, and a solution based on the prediction of a weight mask by an autoencoder architecture, subsequently used to sample values from the photogrammetric DSM. The dataset employed in this study comprises data collected from five case studies of small lowland streams in Poland and Denmark, consisting of 322 DSM and orthophoto raster samples. A grid search was employed to identify the optimal combination of encoder, mask generation architecture, and batch size among multiple candidates. Solutions were evaluated using two cross-validation methods: stratified k-fold cross-validation, where validation subsets maintained the same proportion of samples from all case studies, and leave-one-case-out cross-validation, where the validation dataset originates entirely from a single case study and the training set consists of samples from the other case studies. Depending on the case study and the level of validation strictness, the proposed solution achieved a root mean square error (RMSE) between 2 cm and 16 cm. The proposed method outperforms methods based on straightforward sampling of the photogrammetric DSM, achieving, on average, an 84% lower RMSE for stratified cross-validation and a 62% lower RMSE for all-in-case-out cross-validation. Using data from other research, the proposed solution was compared with other UAV-based methods on the same case study. For that benchmark case study, the proposed solution achieved an RMSE of 5.9 cm for all-in-case-out cross-validation and 3.5 cm for stratified cross-validation, which is close to the result achieved by the radar-based method (RMSE of 3 cm), considered the most accurate method available. The proposed solution is characterized by a high degree of explainability and generalization. Full article
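At inference time, the mask-based variant described above (predict a weight mask, then sample the photogrammetric DSM with it) reduces to a weighted average of DSM cells. A minimal NumPy sketch of that sampling step, assuming the network has already produced a non-negative mask aligned with the DSM raster:

```python
import numpy as np

def sample_wse(dsm: np.ndarray, weight_mask: np.ndarray) -> float:
    """Weighted average of the photogrammetric DSM under a predicted
    non-negative weight mask (mask-averaging approach, sketch only)."""
    w = np.clip(weight_mask, 0.0, None)          # guard against negatives
    return float((dsm * w).sum() / (w.sum() + 1e-12))
```

The appeal of this design, as the figures below suggest, is explainability: the mask shows exactly which DSM cells the network trusts as representing the undistorted water surface.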
Show Figures

Figure 1

Figure 1
<p>Orthophoto mosaics from Grodzisko (<b>a</b>) and Rybna (<b>b</b>) case studies from 13 July 2021.</p>
Full article ">Figure 2
<p>Part of the Kocinka stretch in Rybna (March 2022).</p>
Full article ">Figure 3
<p>Schematic representation of the data set preparation workflow. Colors are for illustration purposes only and do not reflect actual data.</p>
Full article ">Figure 4
<p>Example DSM and orthophoto dataset samples (<b>a</b>–<b>d</b>) with marked areas where the DSM equals the actual WSE ± 5 cm (red color).</p>
Full article ">Figure 5
<p>Direct regression approach—schematic representation. Numbers near arrows provide information about the dimensions of the flowing data.</p>
Full article ">Figure 6
<p>Mask averaging approach—schematic representation. Numbers near arrows provide information about the dimensions of the flowing data.</p>
Full article ">Figure 7
<p>The selection of validation samples for each of the 5 folds in stratified resampling.</p>
Full article ">Figure 8
<p>Predictions of validation subsets from stratified cross-validation plotted against chainage (dark-green points). Compared with ground truth WSE (black line), DSM sampled near streambank (orange points), and DSM sampled at stream centerline (blue points). Columns denote different approaches and rows correspond to distinct case studies.</p>
Full article ">Figure 8 Cont.
<p>Predictions of validation subsets from stratified cross-validation plotted against chainage (dark-green points). Compared with ground truth WSE (black line), DSM sampled near streambank (orange points), and DSM sampled at stream centerline (blue points). Columns denote different approaches and rows correspond to distinct case studies.</p>
Full article ">Figure 9
<p>Predictions of validation subsets from all-in-case-out cross-validation plotted against chainage (dark-green points). Compared with ground truth WSE (black line), DSM sampled near streambank (orange points), and DSM sampled at stream centerline (blue points). Columns denote different approaches and rows correspond to distinct case studies.</p>
Full article ">Figure 9 Cont.
<p>Predictions of validation subsets from all-in-case-out cross-validation plotted against chainage (dark-green points). Compared with ground truth WSE (black line), DSM sampled near streambank (orange points), and DSM sampled at stream centerline (blue points). Columns denote different approaches and rows correspond to distinct case studies.</p>
Full article ">Figure 10
<p>Residuals (ground truth WSE minus predicted WSE) obtained during stratified and all-in-case-out cross-validations for each case study (rows) and method (columns) plotted against chainage.</p>
Full article ">Figure 11
<p>Orthophoto, DSM, and weight masks obtained in stratified and all-in-case-out cross validations for the three best performing samples from the AMO18 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from most to least performing.</p>
Full article ">Figure 12
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three best performing samples from the GRO20 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from most to least performing.</p>
Full article ">Figure 13
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three best performing samples from the GRO21 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> <mo>,</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from most to least performing.</p>
Full article ">Figure 14
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three best performing samples from the RYB20 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from most to least performing.</p>
Full article ">Figure 15
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three best performing samples from the RYB21 case study. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from most to least performing.</p>
Full article ">Figure 16
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three worst performing samples from the AMO18 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> <mo>,</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from least to most performing.</p>
Full article ">Figure 17
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three worst performing samples from the GRO20 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from least to most performing.</p>
Full article ">Figure 18
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three worst performing samples from the GRO21 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red color on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from least to most performing.</p>
Full article ">Figure 19
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three worst performing samples from the RYB20 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from least to most performing.</p>
Full article ">Figure 20
<p>Orthophoto, DSM, and masks obtained in stratified and all-in-case-out cross validations for the three worst performing samples from the RYB21 case study. <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>s</mi> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>e</mi> </mrow> <mrow> <mi>a</mi> <mi>i</mi> <mi>c</mi> <mi>o</mi> </mrow> </msub> </mrow> </semantics></math> correspond to chainage and residuals obtained using stratified and all-in-case-out cross-validation, respectively. Areas where the DSM equals the actual WSE ± 5 cm are marked in red on the orthophoto and DSM. (<b>A</b>–<b>C</b>) samples ordered from least to most performing.</p>
Full article ">Figure A1
<p>Direct regression—validation RMSEs achieved in different cross-validation folds by each encoder. The error bars, indicating 95th percentile intervals, result from variations in batch sizes tested during experimentation.</p>
Full article ">Figure A2
<p>Direct regression—validation RMSEs achieved in different cross-validation folds by various batch sizes. The error bars, indicating 95th percentile intervals, result from variations in encoders tested during experimentation.</p>
Full article ">Figure A3
<p>Mask averaging—validation RMSEs achieved in different cross-validation folds by various encoders. The error bars, indicating 95th percentile intervals, result from variations in batch sizes and architectures tested during experimentation.</p>
Full article ">Figure A4
<p>Mask averaging—validation RMSEs achieved in different cross-validation folds by various batch sizes. The error bars, indicating 95th percentile intervals, result from variations in encoders and architectures tested during experimentation.</p>
Full article ">Figure A5
<p>Mask averaging—validation RMSEs achieved in different cross-validation folds using various architectures. The error bars, indicating 95th percentile intervals, result from variations in batch sizes and encoders tested during experimentation.</p>
Full article ">Figure A6
<p>Fusion approach—schematic representation. Numbers near the arrows provide information about the dimensions of the flowing data.</p>
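As a minimal illustration of the mask-averaging approach evaluated in Figures A3–A5 (a sketch with hypothetical arrays and values, not the authors' pipeline), the water surface elevation is taken as the mean DSM height over a predicted water mask:

```python
import numpy as np

def wse_from_mask(dsm, water_mask):
    """Mask averaging: estimate the water surface elevation (WSE) as the
    mean DSM height over the cells a segmentation model labels as water."""
    return dsm[water_mask].mean()

rng = np.random.default_rng(0)
dsm = rng.normal(12.0, 0.8, (256, 256))      # toy DSM in metres
mask = np.abs(dsm - 12.0) <= 0.05            # stand-in for a predicted water mask
residual = wse_from_mask(dsm, mask) - 12.0   # analogue of the e_strat / e_aico residuals
```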
Full article ">
16 pages, 24606 KiB  
Article
Estimation of Co-Seismic Surface Deformation Induced by 24 September 2019 Mirpur, Pakistan Earthquake along an Active Blind Fault Using Sentinel-1 TOPS Interferometry
by Muhammad Ali, Gilda Schirinzi, Zeeshan Afzal, Alessandra Budillon, Muhammad Saleem Mughal, Sajid Hussain and Giampaolo Ferraioli
Remote Sens. 2024, 16(8), 1457; https://doi.org/10.3390/rs16081457 - 20 Apr 2024
Cited by 2 | Viewed by 1768
Abstract
Surface deformation caused by an earthquake is very important to study for a better understanding of the development of geological structures and seismic hazards in an active tectonic area. In this study, we estimated the surface deformation due to an earthquake along an [...] Read more.
Surface deformation caused by an earthquake is important to study for a better understanding of the development of geological structures and seismic hazards in an active tectonic area. In this study, we estimated the surface deformation due to an earthquake along an active blind fault using Sentinel-1 SAR data. On 24 September 2019, an earthquake of 5.6 Mw and 10 km depth struck near Mirpur, Pakistan. The Mirpur area was severely affected, with widespread structural collapse and the deaths of 34 people. This study aims to estimate the surface deformation associated with this earthquake in Mirpur and adjacent areas. The interferometric synthetic aperture radar (InSAR) technique was applied to study earthquake-induced surface motion. InSAR data consisting of nine Sentinel-1A SAR images from 11 August 2019 to 22 October 2019 were used to investigate the pre-, co- and post-seismic deformation trends. Time series investigation revealed no significant deformation in the pre-seismic period. Strong displacement was observed in the co-seismic period, and small post-seismic displacements were seen due to 4.4 and 3.2 Mw aftershocks. Burst overlap interferometry and offset-tracking analysis were used for more sensitive measurements in the along-track direction. Comprehensive 3D displacement was mapped by combining LOS and along-track offset deformation. The major outcome of our results was the confirmation of the existence of a previously unpublished blind fault in Mirpur. This fault was triggered during the 2005 earthquake and was reactivated on 24 September 2019. Additionally, we presented evidence of co-seismically induced rockslides and some secondary faulting, most of which occurred along or close to the pre-existing blind faults. The study area already faces many natural hazards, and the additional surface deformation, particularly from earthquakes along the activated blind fault, has increased its vulnerability. Full article
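The 3D displacement mapping described in the abstract (two LOS geometries plus an along-track offset) reduces to a small linear inversion. A minimal sketch, assuming one common line-of-sight parameterization and illustrative viewing angles rather than the actual Sentinel-1 track geometry:

```python
import numpy as np

def los_unit_vector(incidence_deg, heading_deg):
    # one common (E, N, U) parameterization of the ground-to-satellite
    # line of sight; sign conventions vary between processors
    inc, head = np.radians(incidence_deg), np.radians(heading_deg)
    return np.array([-np.sin(inc) * np.cos(head),
                      np.sin(inc) * np.sin(head),
                      np.cos(inc)])

def azimuth_unit_vector(heading_deg):
    # horizontal along-track (flight direction) unit vector
    head = np.radians(heading_deg)
    return np.array([np.sin(head), np.cos(head), 0.0])

# illustrative geometry: ascending LOS, descending LOS, ascending azimuth offset
A = np.vstack([los_unit_vector(39.0, -12.0),
               los_unit_vector(39.0, -168.0),
               azimuth_unit_vector(-12.0)])
d_obs = np.array([3.1, -2.4, 1.0])              # toy observations in cm
d_east, d_north, d_up = np.linalg.solve(A, d_obs)  # east, north, up displacement
```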
(This article belongs to the Special Issue Monitoring Geohazard from Synthetic Aperture Radar Interferometry)
Show Figures

Figure 1

Figure 1
<p>Intensity map of the Mirpur earthquake, with strong shaking observed in the study area (data provided by USGS). (<b>a</b>) The black rectangle shows the study area of Mirpur, Pakistan (derived from Open Street Map). The green rectangle shows the ascending swath of Sentinel-1 IW SLC images and the red rectangle shows the descending swath of Sentinel-1 IW SLC images that are used for this study. (<b>b</b>) Contour lines with different colors indicate the intensity of the earthquake, while the yellow star shows the epicenter of the 24 September 2019 Mirpur earthquake; the turquoise dots show the aftershocks occurring for up to one month (from 24 September 2019 to 22 October 2019) after the mainshock (USGS, 2019).</p>
Full article ">Figure 2
<p>The regional tectonic map shows major geological formations and faults of northwestern Pakistan. The blue rectangle indicates the study area and the yellow star shows the epicenter of the 24 September 2019 Mirpur earthquake. Solid black lines show positions of major geological faults and their divisions, and black circular dotted lines show the identified blind faults in the study area.</p>
Full article ">Figure 3
<p>Frequency distribution of earthquakes in Mirpur, Pakistan. Multiple colors indicate the differences in the frequency of earthquakes (magnitude ≥ 4) from 1970 to 2019.</p>
Full article ">Figure 4
<p>Wrapped co-seismic deformation interferograms derived from the Sentinel-1 data. (<b>a</b>) The interferogram is derived from ascending orbit (16 September 2019 to 28 September 2019) and (<b>b</b>) the interferogram is derived from descending orbit (17 September 2019 to 29 September 2019). The solid black lines denote the estimated locations of the thrust faults and black circular dotted lines show the identified blind faults in the study area. The black star indicates the epicenter of the 24 September 2019 Mirpur earthquake derived from USGS data.</p>
Full article ">Figure 5
<p>Unwrapped displacement in cm without atmospheric correction along the LOS estimated by Interferometric SAR using Sentinel-1 data (<b>a</b>) from the ascending orbit between 16 September 2019 and 28 September 2019 and (<b>b</b>) from the descending orbit between 17 September 2019 and 29 September 2019. The two green dotted lines (A to A′ and B to B′), drawn with reference to the direction of the thrust fault lines, mark the profiles used to extract displacement values along the LOS. The solid black lines show the estimated locations of the thrust faults, black circular dotted lines show the identified blind faults and the black star indicates the epicenter of the Mirpur earthquake on 24 September 2019 derived from USGS data.</p>
Full article ">Figure 6
<p>Graphs of the LOS displacement values extracted along the dotted lines shown in <a href="#remotesensing-16-01457-f005" class="html-fig">Figure 5</a>: (<b>a</b>) shows the LOS displacement for line A to A′ for both the ascending and descending orbits and (<b>b</b>) shows the LOS displacement for line B to B′ for both the ascending and descending orbits of Sentinel-1 data.</p>
Full article ">Figure 7
<p>Interferometric and combined offset tracking results. (<b>a</b>,<b>d</b>) surface displacement results (without atmospheric correction applied) derived from interferometry in the descending and ascending orbits of Sentinel-1 images, respectively; (<b>b</b>,<b>e</b>) surface displacement results retrieved from combined offset tracking in the descending and ascending orbits of Sentinel-1 images; (<b>c</b>,<b>f</b>) surface displacement results obtained along the azimuth direction for the ascending and descending orbits of Sentinel-1 images, respectively. (<b>g</b>–<b>i</b>) Three-dimensional (3D) co-seismic deformation map based on the ascending and descending orbit LOS displacements (from DInSAR) and azimuth displacement (from offset tracking) in the east–west, north–south, and up-down directions, respectively.</p>
Full article ">Figure 8
<p>Unwrapped displacement in cm with atmospheric corrections applied along the LOS estimated by Interferometric SAR using Sentinel-1 data. The polygon with black circular patches indicates the mountainous area, while the polygon with black dotted lines indicates the urban area in Mirpur. The blue arrows mark the geolocated points of real-time data obtained during the field survey, and a, b, c, d are assigned according to <a href="#remotesensing-16-01457-f009" class="html-fig">Figure 9</a>. The solid black lines show the estimated locations of the thrust faults, black circular dotted lines show the identified blind faults, and the black star indicates the epicenter of the Mirpur earthquake on 24 September 2019 derived from USGS data.</p>
Full article ">Figure 9
<p>Photos of surface displacement in Mirpur, Pakistan, taken on 25 September 2019, one day after the earthquake (National Disaster Management Authority). (<b>a</b>) Ground photos of secondary faulting observed after the earthquake struck. (<b>b</b>,<b>d</b>) Structural collapse due to the earthquake in Mirpur, Pakistan. (<b>c</b>) Rock slides that damaged a small road in Mirpur, Pakistan.</p>
Full article ">Figure 10
<p>The estimated threshold displacement (cm) with atmospheric corrections applied along the LOS using SAR Interferometry for the pre-seismic time periods (<b>a</b>) 11 August 2019 to 23 August 2019, (<b>b</b>) 23 August 2019 to 4 September 2019, (<b>c</b>) 4 September 2019 to 16 September 2019; co-seismic time periods (<b>d</b>) 16 September 2019 to 28 September 2019; and post-seismic time period (<b>e</b>) 28 September 2019 to 10 October 2019, and (<b>f</b>) 10 October 2019 to 22 October 2019. Displacement variation is shown in different color patterns; areas with no color experienced no displacement.</p>
Full article ">Figure 11
<p>Temporal variations in average displacement (cm) along the LOS (diamond purple line along the primary <span class="html-italic">y</span>-axis) and rate of change of regional displacement (cm) per day along the LOS (square red line along the secondary <span class="html-italic">y</span>-axis) estimated during pre-seismic, co-seismic and post-seismic time periods from 11 August 2019 to 22 October 2019. The <span class="html-italic">X</span>-axis shows the index of pre-seismic, co-seismic and post-seismic interferograms as shown in <a href="#remotesensing-16-01457-f010" class="html-fig">Figure 10</a>a–f. Error bars show the standard deviation in the regional displacement values along the LOS.</p>
Full article ">
19 pages, 8487 KiB  
Article
MRFA-Net: Multi-Scale Receptive Feature Aggregation Network for Cloud and Shadow Detection
by Jianxiang Wang, Yuanlu Li, Xiaoting Fan, Xin Zhou and Mingxuan Wu
Remote Sens. 2024, 16(8), 1456; https://doi.org/10.3390/rs16081456 - 20 Apr 2024
Viewed by 1178
Abstract
The effective segmentation of clouds and cloud shadows is crucial for surface feature extraction, climate monitoring, and atmospheric correction, but it remains a critical challenge in remote sensing image processing. Cloud features are intricate, with varied distributions and unclear boundaries, making accurate extraction [...] Read more.
The effective segmentation of clouds and cloud shadows is crucial for surface feature extraction, climate monitoring, and atmospheric correction, but it remains a critical challenge in remote sensing image processing. Cloud features are intricate, with varied distributions and unclear boundaries, making accurate extraction difficult, and only a few networks address this challenge. To tackle these issues, we introduce a multi-scale receptive field aggregation network (MRFA-Net). The MRFA-Net comprises an MRFA-Encoder and MRFA-Decoder. Within the encoder, the net includes the asymmetric feature extractor module (AFEM) and multi-scale attention, which capture diverse local features and enhance contextual semantic understanding, respectively. The MRFA-Decoder includes the multi-path decoder module (MDM) for blending features and the global feature refinement module (GFRM) for optimizing information via learnable matrix decomposition. Experimental results demonstrate that our model excelled in generalization and segmentation performance when addressing various complex backgrounds and category detection tasks, while exhibiting advantages in parameter efficiency and computational complexity. The MRFA-Net achieved a mean intersection over union (MIoU) of 94.12% on our custom Cloud and Shadow dataset and 87.54% on the open-source HRC_WHU dataset, outperforming other models by at least 0.53 and 0.62 percentage points, respectively. The proposed model demonstrates applicability in practical scenarios where features are difficult to distinguish. Full article
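The reported MIoU can be reproduced from a per-class confusion matrix. A minimal sketch with a toy three-class matrix (cloud, cloud shadow, background; the counts are invented for illustration):

```python
import numpy as np

def mean_iou(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)              # true positives per class
    fp = conf.sum(axis=0) - tp      # false positives per class
    fn = conf.sum(axis=1) - tp      # false negatives per class
    iou = tp / (tp + fp + fn)
    return iou, iou.mean()

conf = [[950, 20, 30],    # cloud
        [15, 900, 85],    # cloud shadow
        [25, 60, 915]]    # background
per_class_iou, miou = mean_iou(conf)
```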
Show Figures

Figure 1

Figure 1
<p>The structure of the MRFA-Net, consisting of encoding and decoding sections. (<b>a</b>) The encoding section primarily employs the AFEM for feature extraction and multi-scale attention, with slight parameter and implementation variations across the stages. (<b>b</b>) The decoding phase predominantly refines details and decodes through the MDM and GFRM at different stages. (n_classes refers to the number of output channels, with each channel corresponding to one category. In this study, cloud and shadow detection involves three classifications: cloud, cloud shadow, and background).</p>
Full article ">Figure 2
<p>The structure of the AFEM. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>C</mi> <mi>o</mi> <mi>n</mi> <mi>v</mi> </mrow> <mrow> <mi>X</mi> <mn>0</mn> </mrow> </msub> </mrow> </semantics></math> represents point-wise convolution, whereas <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>C</mi> <mi>o</mi> <mi>n</mi> <mi>v</mi> </mrow> <mrow> <mi>X</mi> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>C</mi> <mi>o</mi> <mi>n</mi> <mi>v</mi> </mrow> <mrow> <mi>X</mi> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>C</mi> <mi>o</mi> <mi>n</mi> <mi>v</mi> </mrow> <mrow> <mi>X</mi> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> represent strip convolutions that vary according to the sizes of features.</p>
Full article ">Figure 3
<p>The structure of MSA. Features, after being concatenated through atrous convolutions with varying dilation rates to enlarge the receptive field, are then fed into the channel attention mechanisms.</p>
Full article ">Figure 4
<p>The structure of MCA. After features of different receptive fields are stacked, channel attention is applied and the result is fed into the next layer.</p>
Full article ">Figure 5
<p>The structure of the MDM. After feature fusion, the channel count of the receptive field features is reduced. Up-sampling is performed using bilinear interpolation. The red feature maps will be visualized and discussed in <a href="#sec3dot3-remotesensing-16-01456" class="html-sec">Section 3.3</a>.</p>
Full article ">Figure 6
<p>The structure of the GFRM. Linear transformations are needed before and after matrix factorization in order to facilitate computation. The purpose of BPTT (Backpropagation Through Time) is to compute gradients, which are then handed over to the deep learning framework and optimized for fitting and iteration.</p>
Full article ">Figure 7
<p>The first and second rows, respectively, showcase some training data from the Cloud and Cloud Shadow Dataset and sample images from the HRC_WHU dataset. These primarily include (<b>a</b>) urban areas, (<b>b</b>) plant areas, (<b>c</b>) farmland areas, (<b>d</b>) desert areas, (<b>e</b>) water areas, and others.</p>
Full article ">Figure 8
<p>In the ablation study’s heat map, the color mapping from red to blue represents the gradual decay of the target category’s weight. White boxes indicate accurately extracted features, red boxes signify false detections, yellow boxes represent missed detections, and red circles denote significant noise.</p>
Full article ">Figure 9
<p>Visualization of features extracted from different paths.</p>
Full article ">Figure 10
<p>Prediction results on the Cloud and Cloud Shadow Dataset, where the primary differences are highlighted in red and yellow. White represents clouds, gray denotes the background, and black indicates shadows.</p>
Full article ">Figure 11
<p>Prediction results on the HRC_WHU Dataset: white represents clouds, blue indicates accumulated snow, and gray denotes the background.</p>
Full article ">
24 pages, 8058 KiB  
Article
Spatiotemporal Analysis of Drought Characteristics and Their Impact on Vegetation and Crop Production in Rwanda
by Schadrack Niyonsenga, Anwar Eziz, Alishir Kurban, Xiuliang Yuan, Edovia Dufatanye Umwali, Hossein Azadi, Egide Hakorimana, Adeline Umugwaneza, Gift Donu Fidelis, Justin Nsanzabaganwa and Vincent Nzabarinda
Remote Sens. 2024, 16(8), 1455; https://doi.org/10.3390/rs16081455 - 20 Apr 2024
Cited by 6 | Viewed by 2355
Abstract
In recent years, Rwanda, especially its Eastern Province, has been contending with water shortages, primarily due to prolonged dry spells and restricted water sources. This situation poses a substantial threat to the country’s agriculture-based economy and food security. The impact may escalate with [...] Read more.
In recent years, Rwanda, especially its Eastern Province, has been contending with water shortages, primarily due to prolonged dry spells and restricted water sources. This situation poses a substantial threat to the country’s agriculture-based economy and food security. The impact may escalate with climate change, which exacerbates the frequency and severity of droughts. However, comprehensive spatiotemporal analysis of meteorological and agricultural droughts is lacking, and a nationwide assessment of drought impacts on vegetation and agriculture is urgently needed. Therefore, the study aimed to identify meteorological and agricultural droughts by employing the Standardized Precipitation Evapotranspiration Index (SPEI) and the Vegetation Health Index (VHI). VHI comprises the Vegetation Condition Index (VCI) and the Temperature Condition Index (TCI), derived from the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST), respectively. This study analyzed data from 31 meteorological stations spanning from 1983 to 2020, as well as remote sensing indices from 2001 to 2020, to assess the spatiotemporal patterns, characteristics, and adverse impact of droughts on vegetation and agriculture. The results showed that the years 2003, 2004, 2005, 2006, 2013, 2014, 2015, 2016, and 2017 experienced the most prolonged and severe meteorological and agricultural droughts, especially in the Southern Province and Eastern Province. These extremely dry conditions led to a decline in both vegetation and crop production in the country. It is recommended that policymakers engage in proactive drought mitigation activities, address climate change, and enforce water resource management policies in Rwanda. These actions are crucial to decreasing the risk of drought and its negative impact on both vegetation and crop production in Rwanda. Full article
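The VCI/TCI/VHI chain summarized above follows the standard formulations; a minimal sketch, assuming the commonly used equal weighting α = 0.5 (the paper's exact weighting is not stated in the abstract):

```python
def vci(ndvi, ndvi_min, ndvi_max):
    # Vegetation Condition Index: NDVI scaled by its multi-year extremes
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    # Temperature Condition Index: inverted, since hotter means more stress
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(ndvi, ndvi_ext, lst, lst_ext, alpha=0.5):
    # alpha = 0.5 weights vegetation and thermal stress equally (assumed)
    return alpha * vci(ndvi, *ndvi_ext) + (1.0 - alpha) * tci(lst, *lst_ext)

# toy pixel: NDVI of 0.45 within [0.2, 0.8]; LST of 305 K within [290, 320]
print(vhi(0.45, (0.2, 0.8), 305.0, (290.0, 320.0)))  # low value = drought stress
```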
(This article belongs to the Special Issue Satellite-Based Climate Change and Sustainability Studies)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Study area map, country province boundaries with 31 stations, and African country boundaries.</p>
Full article ">Figure 2
<p>Temporal patterns of annual mean maximum (<b>a</b>), minimum temperature in °C (<b>b</b>), and mean annual rainfall (mm), (<b>c</b>) for the four provinces and the capital city during the study period of 1983–2020.</p>
Full article ">Figure 3
<p>Temporal variation in SPEI-3 from 1983 to 2020 in the provinces of Rwanda: Eastern Province (<b>a</b>), Southern Province (<b>b</b>), Kigali City (<b>c</b>), Northern Province (<b>d</b>), and Western Province (<b>e</b>).</p>
Full article ">Figure 4
<p>The characteristics of drought considered are intensity (<b>a</b>), severity (<b>b</b>), frequency (<b>c</b>), and duration (<b>d</b>).</p>
Full article ">Figure 5
<p>Spatial-temporal variation in the TCI from 2001 to 2010.</p>
Full article ">Figure 6
<p>Spatial-temporal variation in the TCI from 2011 to 2020.</p>
Full article ">Figure 7
<p>Spatial-temporal variation in the VCI from 2001 to 2010.</p>
Full article ">Figure 8
<p>Spatial-temporal variation in the VCI from 2011 to 2020.</p>
Full article ">Figure 9
<p>Spatial-temporal variation in the vegetation health index (VHI) in the period between 2001 and 2010.</p>
Full article ">Figure 10
<p>Spatial-temporal variation in the vegetation health index (VHI) in the period between 2011 and 2020.</p>
Full article ">Figure 11
<p>Correlation between VHI and annual rainfall (2001 to 2020).</p>
Full article ">Figure 12
<p>Potato (<b>a</b>,<b>b</b>) and maize (<b>c</b>,<b>d</b>) production in the study area from 2001 to 2020, with the correlation between each crop’s production and the VHI.</p>
Full article ">
23 pages, 11818 KiB  
Article
GIS and Machine Learning Models Target Dynamic Settlement Patterns and Their Driving Mechanisms from the Neolithic to Bronze Age in the Northeastern Tibetan Plateau
by Gang Li, Jiajia Dong, Minglu Che, Xin Wang, Jing Fan and Guanghui Dong
Remote Sens. 2024, 16(8), 1454; https://doi.org/10.3390/rs16081454 - 19 Apr 2024
Viewed by 2208
Abstract
Traditional GIS-based statistical models are intended to extrapolate patterns of settlements and their interactions with the environment. They contribute significantly to our knowledge of past human–land relationships. Yet, these models are often criticized for their empiricism, lopsided specific factors, and for overlooking the [...] Read more.
Traditional GIS-based statistical models are intended to extrapolate patterns of settlements and their interactions with the environment. They contribute significantly to our knowledge of past human–land relationships. Yet, these models are often criticized for their empiricism, their one-sided emphasis on specific factors, and their neglect of the synergy between variables. Though largely untested, machine learning and artificial intelligence methods have the potential to overcome these shortcomings comprehensively and objectively. The northeastern Tibetan Plateau (NETP) is characterized by diverse environments and significant changes to the social system from the Neolithic to Bronze Age. In this study, this area serves as a representative case for assessing the complex relationships between settlement locations and geographic environments, taking full advantage of these new models. We have explored a novel modeling case by employing GIS and random forests to consider multiple factors, including terrain, vegetation, soil, climate, hydrology, and land suitability, to construct classification models identifying environmental variation across different cultural periods. The model exhibited strong performance and a high archaeological prediction value. Potential living maps were generated for each cultural stage, revealing distinct environmental selection strategies from the Neolithic to Bronze Age. The key environmental parameters of elevation, climate, soil erosion, and cultivated land suitability received high weights, influencing human environmental decisions synergistically. Furthermore, we conducted a quantitative analysis of temporal dynamics in climate and subsistence to understand the driving mechanisms behind environmental strategies. These findings suggest that past human environmental strategies were based on the comprehensive consideration of various factors, coupled with their socioeconomic context. Such subsistence-oriented activities supported human beings in overcoming elevation limitations, and thus allowed them to inhabit wider pastoral areas. This study showcases the potential of machine learning in predicting archaeological probabilities and in interpreting the environmental influence on settlement patterns. Full article
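A minimal sketch of the random forest site/non-site workflow described above (OOB accuracy, 10-fold cross-validation, Gini-based variable ranking), using synthetic placeholder features rather than the study's actual 16 environmental variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))    # placeholder: 16 environmental variables
y = rng.integers(0, 2, size=500)  # placeholder: 1 = site, 0 = non-site

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)
print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())
ranking = np.argsort(rf.feature_importances_)[::-1]  # mean decrease Gini order
```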
(This article belongs to the Section Environmental Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>The study area and archaeological site distribution across different cultural periods from the Neolithic to Bronze Age.</p>
Full article ">Figure 2
<p>Flow chart of methodology.</p>
Full article ">Figure 3
<p>Site location validation using known site locations.</p>
Full article ">Figure 4
<p>The confusion matrix, generated with OOB data (<b>a</b>), test set data (<b>b</b>), and mean of 10-fold CV from a random forests model (<b>c</b>); geographic variables ranked according to mean Gini decrease (<b>d</b>).</p>
Full article ">Figure 5
<p>Prediction distribution probabilities for whole sites/non-sites (<b>a</b>), YS-MJY cultural sites, QJ cultural sites, and KXN cultural sites (<b>b</b>).</p>
Full article ">Figure 6
<p>Single factor comparison of several important variables.</p>
Full article ">Figure 7
<p>Classification tree for different cultural sites and non-sites. Each leaf node displays the predicted category, the proportion of each class within this leaf node, and the proportion of all samples within this leaf node.</p>
Full article ">Figure 8
<p>SOM grid map of all 16 variables, shown as points in (<b>a</b>) and as pies in (<b>b</b>); distinct colors signify cultures. SOM grid map of the 4 most important variables (<b>c</b>) and the corresponding code map (<b>d</b>); for each cell, the sector area represents the weight of the variable and different colors represent environmental factors. The black line on the left marks the environmental boundary distinguishing sites from non-sites, and the one on the right delineates the boundary of the KXN culture versus the others.</p>
Full article ">Figure 9
<p>Climate change during different cultural periods in the TP. The temperature reconstruction uses a simulated mean annual temperature (MAT) for the TP [<a href="#B110-remotesensing-16-01454" class="html-bibr">110</a>], and the mean annual precipitation (MAP) reconstruction uses tree-ring records from Delingha [<a href="#B109-remotesensing-16-01454" class="html-bibr">109</a>]. Cultural duration ranges are based on the <sup>14</sup>C data that we collected.</p>
Full article ">Figure 10
<p>Subsistence strategy shifts during different cultural stages. PCA score plot of animal resource utilization (<b>a</b>); SOM grid map and the corresponding code map for crop utilization, where the black lines divide sites into four different subsistence clusters (<b>b</b>).</p>
Full article ">
18 pages, 3220 KiB  
Article
Time–Frequency Signal Integrity Monitoring Algorithm Based on Temperature Compensation Frequency Bias Combination Model
by Yu Guo, Zongnan Li, Hang Gong, Jing Peng and Gang Ou
Remote Sens. 2024, 16(8), 1453; https://doi.org/10.3390/rs16081453 - 19 Apr 2024
Cited by 1 | Viewed by 1022
Abstract
To ensure the long-term stable and uninterrupted service of satellite navigation systems, the robustness and reliability of time–frequency systems are crucial. Integrity monitoring is an effective method to enhance the robustness and reliability of time–frequency systems. Time–frequency signals are fundamental for integrity monitoring, [...] Read more.
To ensure the long-term stable and uninterrupted service of satellite navigation systems, the robustness and reliability of time–frequency systems are crucial. Integrity monitoring is an effective method to enhance the robustness and reliability of time–frequency systems. Time–frequency signals are fundamental for integrity monitoring, with their time differences and frequency biases serving as essential indicators. These indicators are influenced by the inherent characteristics of the time–frequency signals, as well as the links and equipment they traverse. Meanwhile, existing research primarily focuses only on monitoring the integrity of the time–frequency signals output by the atomic clock group, neglecting the integrity monitoring of the time–frequency signals generated and distributed by the time–frequency signal generation and distribution subsystem. This paper introduces a time–frequency signal integrity monitoring algorithm based on the temperature compensation frequency bias combination model. By analyzing the characteristics of time difference measurements, constructing the temperature compensation frequency bias combination model, and extracting and monitoring noise and frequency bias features from the time difference measurements, the algorithm achieves comprehensive time–frequency signal integrity monitoring. Experimental results demonstrate that the algorithm can effectively detect, identify, and alert users to time–frequency signal faults. Additionally, the model and the integrity monitoring parameters developed in this paper exhibit high adaptability, making them directly applicable to the integrity monitoring of time–frequency signals across various links. Compared with traditional monitoring algorithms, the algorithm proposed in this paper greatly improves the effectiveness, adaptability, and real-time performance of time–frequency signal integrity monitoring. Full article
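A hypothetical, stripped-down reading of the temperature compensation frequency bias idea: fit the time difference series with a phase offset, a frequency bias term, and a linear temperature term, then monitor the residuals against a threshold. The model form, sampling, and threshold below are illustrative assumptions, not the paper's:

```python
import numpy as np

def fit_temp_compensated_bias(t, x, temp):
    """Least-squares fit of x(t) = x0 + y*t + k*T(t): phase offset x0,
    frequency bias y, and temperature sensitivity k."""
    A = np.column_stack([np.ones_like(t), t, temp])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef, x - A @ coef                      # coefficients, residuals

t = np.arange(0.0, 86400.0, 60.0)                  # one day at 60 s sampling
temp = 23.0 + 0.5 * np.sin(2 * np.pi * t / 86400)  # diurnal temperature swing
noise = np.random.default_rng(1).normal(0.0, 5e-11, t.size)
x = 1e-9 + 2e-13 * t + 4e-11 * temp + noise        # synthetic time difference (s)

coef, resid = fit_temp_compensated_bias(t, x, temp)
freq_bias = coef[1]                                # fitted frequency bias
alarm = np.sqrt(np.mean(resid ** 2)) > 1e-10       # illustrative RMSE threshold
```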
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Architecture of a typical time–frequency system.</p>
Full article ">Figure 2
<p>Algorithm flowchart.</p>
Full article ">Figure 3
<p>Curve of time–frequency signal time difference measurement.</p>
Full article ">Figure 4
<p>Curve of ambient temperature change.</p>
Full article ">Figure 5
<p>The noise characteristics of the time–frequency signal time difference.</p>
Full article ">Figure 6
<p>Curve of the mean and RMSE of forecast bias.</p>
Full article ">Figure 7
<p>Curve of the PFA of various thresholds. (<b>a</b>) Curve of the PFA of the threshold of the mean of forecast bias; (<b>b</b>) Curve of the PFA of the threshold of RMSE; (<b>c</b>) Curve of the PFA of the threshold of frequency bias.</p>
Full article ">Figure 8
<p>Curve of the PMD of various thresholds. (<b>a</b>) Curve of the PMD of the threshold of the mean of forecast bias; (<b>b</b>) Curve of the PMD of the threshold of RMSE; (<b>c</b>) Curve of the PMD of the threshold of frequency bias.</p>
Full article ">Figure 9
<p>Simulation of time–frequency signal fault. (<b>a</b>) Simulation of phase transition fault; (<b>b</b>) Simulation of noise deterioration fault; (<b>c</b>) Simulation of frequency transition fault.</p>
Full article ">Figure 10
<p>The process of the traditional monitoring algorithm.</p>
Full article ">
19 pages, 10542 KiB  
Article
InSAR Digital Elevation Model Void-Filling Method Based on Incorporating Elevation Outlier Detection
by Zhi Hu, Rong Gui, Jun Hu, Haiqiang Fu, Yibo Yuan, Kun Jiang and Liqun Liu
Remote Sens. 2024, 16(8), 1452; https://doi.org/10.3390/rs16081452 - 19 Apr 2024
Cited by 3 | Viewed by 1390
Abstract
Accurate and complete digital elevation models (DEMs) play an important fundamental role in geospatial analysis, supporting various engineering applications, human activities, and scientific research. Interferometric synthetic aperture radar (InSAR) plays an increasingly important role in DEM generation. Nonetheless, owing to its inherent characteristics, [...] Read more.
Accurate and complete digital elevation models (DEMs) play an important fundamental role in geospatial analysis, supporting various engineering applications, human activities, and scientific research. Interferometric synthetic aperture radar (InSAR) plays an increasingly important role in DEM generation. Nonetheless, owing to its inherent characteristics, gaps often appear in regions marked by significant topographical fluctuations, necessitating an extra void-filling process. Traditional void-filling methods operate directly on the preexisting data and succeed in relatively flat terrain. In mountainous regions, however, the elevation values surrounding a void often contain gross errors, a vital consideration that conventional methods have disregarded. To this end, this research proposes a DEM void-filling method based on incorporating elevation outlier detection. It accounts for the detection and removal of elevation outliers, thereby mitigating the shortcomings of existing methods and ensuring robust DEM restoration in mountainous terrain. Experiments were conducted to validate the method’s applicability using TanDEM-X data from Sichuan, China, Hebei, China, and Oregon, USA. The results underscore the superiority of the proposed method. Three traditional methods were selected for comparison. The proposed method improves filling accuracy to different degrees, depending on the void conditions of the local terrain. Compared with the delta surface fill (DSF) method, the root mean squared error (RMSE) of the filling results improved by 7.87% to 51.87%. The qualitative and quantitative experiments demonstrate that the proposed method is promising for large-scale DEM void-filling tasks. Full article
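The statistical screening step can be sketched as a μ ± kσ test on the difference between the InSAR DEM and a reference DEM (compare Figure 3 below); k = 3 is an assumed cutoff, not necessarily the paper's value:

```python
import numpy as np

def flag_elevation_outliers(insar_dem, reference_dem, k=3.0):
    """Flag pixels whose elevation difference to the reference DEM falls
    outside mu +/- k*sigma of the difference distribution."""
    diff = insar_dem - reference_dem
    valid = np.isfinite(diff)
    mu, sigma = diff[valid].mean(), diff[valid].std()
    return valid & (np.abs(diff - mu) > k * sigma)

dem = np.random.default_rng(2).normal(1500.0, 5.0, (100, 100))
ref = dem + np.random.default_rng(3).normal(0.0, 1.0, dem.shape)
dem[10, 10] += 80.0                               # inject one gross error
outlier_mask = flag_elevation_outliers(dem, ref)  # True at (10, 10)
```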
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Void-filling flow chart of the proposed method based on incorporating elevation outlier detection.</p>
Full article ">Figure 2
<p>DEM outlier example. (<b>a</b>) the actual elevation; (<b>b</b>) the residual map of (<b>a</b>) compared to the reference DEM.</p>
Full article ">Figure 3
<p>Statistical outlier detection scheme. (<b>a</b>) An example of a statistical histogram of elevation differences between an InSAR-DEM and an external DEM; the mean elevation difference μ and standard deviation σ are listed; (<b>b</b>) A schematic diagram of the outlier distribution under a normal distribution; data outside the two red dashed lines represent outliers.</p>
Full article ">Figure 4
<p>Dilation and erosion morphology operations. Examples are shown before the operation, after the dilation operation, and after the dilation and erosion operations. Green represents the initial void range, yellow represents the void range added by the dilation operation, blue represents the void range retained after the erosion operation, and red represents the structuring elements of the morphological operations.</p>
Full article ">Figure 5
<p>DSF implementation process. Various colors: light green for DEM data, red for data voids, and pink for the delta surface. (<b>a</b>) is the raw DEM; (<b>b</b>) is the external DEM; (<b>c</b>) = (<b>b</b>) − (<b>a</b>) is the delta surface, corresponding to step (I); (<b>d</b>) is the delta surface with the centers of large voids filled by the mean value, corresponding to step (II); (<b>e</b>) is the delta surface filled and interpolated entirely, corresponding to step (III); (<b>f</b>) = (<b>b</b>) − (<b>e</b>) is the filled DEM, corresponding to step (IV). Red represents void pixels, pink represents elevation difference pixels, and green represents elevation pixels.</p>
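Following steps (I)–(IV) of the caption above, a minimal delta surface fill sketch; SciPy's generic griddata interpolation stands in for the paper's mean-seeding and interpolation of the delta surface:

```python
import numpy as np
from scipy.interpolate import griddata

def delta_surface_fill(raw, external):
    """(I) delta surface on valid pixels; (II)+(III) interpolate it across
    the voids; (IV) filled DEM = external - interpolated delta."""
    delta = external - raw                        # NaN inside the voids
    valid = np.isfinite(delta)
    rows, cols = np.indices(delta.shape)
    filled_delta = griddata((rows[valid], cols[valid]), delta[valid],
                            (rows, cols), method="linear")
    return np.where(valid, raw, external - filled_delta)

raw = np.random.default_rng(4).normal(800.0, 20.0, (60, 60))
external = raw + 2.5                              # toy systematic offset
raw[20:30, 20:30] = np.nan                        # simulated void
filled = delta_surface_fill(raw, external)
```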
Full article ">Figure 6
<p>The location of the study areas and data coverage (red rectangular box). A, B, and C are data coverage maps of the three study areas of Sichuan, Hebei, and Oregon respectively.</p>
Full article ">Figure 7
<p>Raw DEMs (<b>a</b>,<b>c</b>,<b>e</b>) generated by TanDEM-X and reference DEMs (<b>b</b>,<b>d</b>,<b>f</b>). (<b>a</b>,<b>b</b>) correspond to area A, (<b>c</b>,<b>d</b>) correspond to area B, and (<b>e</b>,<b>f</b>) correspond to area C.</p>
Full article ">Figure 8
<p>Two small areas were enlarged in area A, area B and area C, named area 1, area 2, area 3, area 4, area 5, and area 6. These areas, denoted as raw voids and voids after elevation outlier detection, will be used in subsequent experiments. The white background in the enlarged figure represents voids.</p>
Full article ">Figure 9
<p>Comparison of the filling results of six different areas using different methods. The white background in the figure represents voids.</p>
Full article ">Figure 10
<p>Enlarged view of the elevation difference of DEMs filled through different methods.</p>
Full article ">Figure 11
<p>Profile performance of enlarged figures of the six areas using different void-filling methods. The location of the section lines is indicated by purple and yellow line segments.</p>
Full article ">
19 pages, 20471 KiB  
Article
Combining Multitemporal Optical and Radar Satellite Data for Mapping the Tatra Mountains Non-Forest Plant Communities
by Marcin Kluczek, Bogdan Zagajewski and Marlena Kycko
Remote Sens. 2024, 16(8), 1451; https://doi.org/10.3390/rs16081451 - 19 Apr 2024
Cited by 7 | Viewed by 1596
Abstract
Climate change is significantly affecting mountain plant communities, causing dynamic alterations in species composition as well as spatial distribution. This raises the need for constant monitoring. The Tatra Mountains are the highest range of the Carpathians which are considered biodiversity hotspots in Central [...] Read more.
Climate change is significantly affecting mountain plant communities, causing dynamic alterations in species composition as well as spatial distribution. This raises the need for constant monitoring. The Tatra Mountains are the highest range of the Carpathians, which are considered biodiversity hotspots in Central Europe. For this purpose, microwave Sentinel-1 and optical multi-temporal Sentinel-2 data, topographic derivatives, and iterative machine learning methods incorporating the random forest (RF), support vector machine (SVM), and XGBoost (XGB) classifiers were used for the identification of thirteen non-forest plant communities (various types of alpine grasslands, shrublands, herbaceous heaths, mountain hay meadows, rocks, and scree communities). Different scenarios were tested to identify the most important variables, retrieval periods, and spectral bands. The overall accuracies reached 0.83–0.96 for RF and 0.87–0.93 for SVM, with lower results for XGBoost (0.69–0.82). The best combination, which included a fusion of Sentinel-1, Sentinel-2, and topographic data, achieved F1-scores for classes in the range of 0.73–0.97 (RF) and 0.66–0.95 (SVM). The inclusion of topographic variables improved the F1-scores by 1–4 percentage points for Sentinel-2 data and by 1–9 percentage points for Sentinel-1 data. For spectral bands, the Sentinel-2 10 m resolution bands B4, B3, and B2 showed the highest mean decrease in accuracy. The final result is the first comprehensive map of non-forest vegetation for the Tatra Mountains area. Full article
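The iterative accuracy assessment behind the per-class F1-scores can be sketched as repeated stratified splits; the feature stack and class labels below are placeholders, and only 10 iterations are run here where the study uses 100:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1300, 60))     # placeholder multi-date S1/S2 + terrain stack
y = rng.integers(0, 13, size=1300)  # 13 non-forest community classes

runs = []
for i in range(10):                 # the study draws 100 such iterations
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=i)
    rf = RandomForestClassifier(n_estimators=100, random_state=i).fit(X_tr, y_tr)
    runs.append(f1_score(y_te, rf.predict(X_te), average=None))
f1_per_class = np.array(runs)       # (10, 13): distribution behind the box plots
```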
(This article belongs to the Special Issue Remote Sensing for Mountain Ecosystems II)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Location of the study area the core and buffer zone of Tatra Transboundary Biosphere Reserve (red box on the left image; satellite image for Poland area: Sentinel-2 RGB median composition for June–August 2023; generated using Google Earth Engine; satellite image for the Tatras area: Sentinel-2 RGB 2020-08-22).</p>
Full article ">Figure 2
<p>Species’ physiological and morphological adaptations allow individual plant communities to be identified from spectral features recorded in multitemporal and multispectral satellite data (view from Kasprowy Wierch (location: 49°13′57″N, 19°58′53″E) towards the Western Tatras; photo credit: Bogdan Zagajewski).</p>
Full article ">Figure 3
<p>Sentinel-2 acquisition dates with the distribution of mean cloud cover based on the metadata of all available satellite scenes.</p>
Full article ">Figure 4
<p>Iterative classification method scheme.</p>
Full article ">Figure 5
<p>Overall accuracy results of different scenarios (Explanation: S1–Sentinel-1, S2–Sentinel-2, TF–topographic features including digital elevation model, slope and aspect maps).</p>
Full article ">Figure 6
<p>Mean F1-score for classes by acquisition date, based on 100 iterations (explanation: S1—Sentinel-1, S2—Sentinel-2, TF—topographic features including digital elevation model, slope and aspect maps, ASC—ascending orbit, DSC—descending orbit).</p>
Full article ">Figure 7
<p>Variation of the F1-score, producer accuracy (PA), and user accuracy (UA) values for classes based on 100 iterations using random forest algorithm and combined Sentinel-1 with Sentinel-2 and topographic features. For each box plot, the lower quartile (Q1) is the lower edge of the box, the median is the bar in the box, and the upper quartile (Q3) is the upper edge of the box.</p>
Full article ">Figure 8
<p>Variable importance on Sentinel-2 bands (mean decrease accuracy: digital elevation model = 18.0, slope = 9.7, aspect = 3.6; mean decrease Gini: digital elevation model = 9.7, slope = 2.9, aspect = 0.8).</p>
Full article ">Figure 9
<p>Comparison of obtained classification maps based on random forest and support vector machines with high-resolution orthophoto map (geographic coordinates represent centroids of polygons).</p>
Full article ">Figure 10
<p>Ridgeline plots of mountain vegetation communities classification results (frequency) in relation to altitude based on the digital elevation model.</p>
Full article ">Figure A1
<p>Map of occurrence of non-forest communities in Tatra Mountains. Classification based on multi-temporal Sentinel-2 and Sentinel-1 data combined with topographic derivatives (digital elevation model, and slope and aspect maps) and random forest classifier.</p>
Full article ">
21 pages, 3492 KiB  
Article
GLUENet: An Efficient Network for Remote Sensing Image Dehazing with Gated Linear Units and Efficient Channel Attention
by Jiahao Fang, Xing Wang, Yujie Li, Xuefeng Zhang, Bingxian Zhang and Martin Gade
Remote Sens. 2024, 16(8), 1450; https://doi.org/10.3390/rs16081450 - 19 Apr 2024
Cited by 1 | Viewed by 1317
Abstract
Dehazing individual remote sensing (RS) images is an effective approach to enhance the quality of hazy remote sensing imagery. However, current dehazing methods exhibit substantial systemic and computational complexity. Such complexity not only hampers the straightforward analysis and comparison of these methods but [...] Read more.
Dehazing individual remote sensing (RS) images is an effective approach to enhance the quality of hazy remote sensing imagery. However, current dehazing methods exhibit substantial systemic and computational complexity. Such complexity not only hampers the straightforward analysis and comparison of these methods but also undermines their practical effectiveness on actual data, owing to the overtraining and overfitting of model parameters. To mitigate these issues, we introduce a novel dehazing network for non-uniformly hazy RS images: GLUENet, designed to be both lightweight and computationally efficient. Our approach commences with the implementation of the classical U-Net, integrated with both local and global residuals, establishing a robust base for the extraction of multi-scale information. Subsequently, we construct basic convolutional blocks using gated linear units and efficient channel attention, incorporating depthwise-separable convolutional layers to efficiently aggregate spatial information and transform features. Additionally, we introduce a fusion block based on efficient channel attention, facilitating the fusion of information from different stages in both encoding and decoding to enhance the recovery of texture details. GLUENet’s efficacy was evaluated using both synthetic and real remote sensing dehazing datasets, providing a comprehensive assessment of its performance. The experimental results demonstrate that GLUENet’s performance is on par with state-of-the-art (SOTA) methods and surpasses them on our proposed real remote sensing dataset. On the real remote sensing dehazing dataset, our method improves the PSNR metric by 0.31 dB and the SSIM metric by 0.13, while the number of parameters and computations of the model are much lower than those of the best competing method. Full article
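For readers unfamiliar with the two building blocks named in the title, here is a minimal PyTorch sketch of a channel-splitting gated linear unit and efficient channel attention; this is a generic rendering of the published GLU/ECA designs, not GLUENet's actual blocks:

```python
import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    """Gated linear unit variant: split the channels, multiply the halves."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class ECA(nn.Module):
    """Efficient channel attention: a 1-D conv over the pooled channel vector."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3), keepdim=True)          # (B, C, 1, 1) global pool
        y = self.conv(y.squeeze(-1).transpose(1, 2))  # conv across channels
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                                  # reweight the channels

block = nn.Sequential(nn.Conv2d(32, 64, 1), SimpleGate(), ECA())
out = block(torch.randn(2, 32, 16, 16))               # -> (2, 32, 16, 16)
```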
(This article belongs to the Section AI Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>A demonstration of our method’s outcomes compared to others. First row: synthetic haze. Second row: real haze. (<b>a</b>) Hazy images. (<b>b</b>) DCP. (<b>c</b>) AOD-Net. (<b>d</b>) DehazeFormer-B. (<b>e</b>) GLUENet (ours).</p>
Full article ">Figure 2
<p>Our proposed GLUENet is a simple U-Net variant. Compared to the conventional U-Net architecture, GLUENet uses GLUE blocks and an ECA Fusion module to replace the original convolutional blocks and concatenation fusion layers.</p>
Full article ">Figure 3
<p>Structure of the GLUE block.</p>
Full article ">Figure 4
<p>Structure of the Efficient Channel Attention.</p>
Full article ">Figure 5
<p>Structure of the Attention-Guided Fusion.</p>
Full article ">Figure 6
<p>Global distribution of the source images of our RRSH dataset.</p>
Full article ">Figure 7
<p>Qualitative comparisons on RSHaze. (<b>a</b>) Synthetic haze images. (<b>b</b>) DCP. (<b>c</b>) AOD-Net. (<b>d</b>) GCANet. (<b>e</b>) FCTF-Net. (<b>f</b>) DehazeFormer. (<b>g</b>) GLUENet (ours). (<b>h</b>) Ground-truth.</p>
Full article ">Figure 8
<p>Qualitative comparisons on RRSH. (<b>a</b>) Real haze images. (<b>b</b>) DCP. (<b>c</b>) AOD-Net. (<b>d</b>) GCANet. (<b>e</b>) FCTF-Net. (<b>f</b>) DehazeFormer. (<b>g</b>) GLUENet (ours). (<b>h</b>) Clear images.</p>
Full article ">Figure 9
<p>Qualitative comparisons on real RS images. (<b>a</b>) Real haze images. (<b>b</b>) GCANet. (<b>c</b>) FCTF-Net. (<b>d</b>) DehazeFormer. (<b>e</b>) GLUENet (ours). The boxes mark the areas in the large-scale remote sensing image on the left from which the detail views on the right are taken.</p>
Full article ">
22 pages, 10413 KiB  
Article
Bridging Domains and Resolutions: Deep Learning-Based Land Cover Mapping without Matched Labels
by Shuyi Cao, Yubin Tang, Enping Yan, Jiawei Jiang and Dengkui Mo
Remote Sens. 2024, 16(8), 1449; https://doi.org/10.3390/rs16081449 - 19 Apr 2024
Viewed by 1585
Abstract
High-resolution land cover mapping is crucial in various disciplines but is often hindered by the lack of accurately matched labels. Our study introduces an innovative deep learning methodology for effective land cover mapping, independent of matched labels. The approach comprises three main components: [...] Read more.
High-resolution land cover mapping is crucial in various disciplines but is often hindered by the lack of accurately matched labels. Our study introduces an innovative deep learning methodology for effective land cover mapping, independent of matched labels. The approach comprises three main components: (1) An advanced fully convolutional neural network, augmented with super-resolution features, to refine labels; (2) The application of an instance-batch normalization network (IBN), leveraging these enhanced labels from the source domain, to generate 2-m resolution land cover maps for test sites in the target domain; (3) Noise assessment tests to evaluate the impact of varying noise levels on the model’s mapping accuracy using external labels. The model achieved an overall accuracy of 83.40% in the target domain using endogenous super-resolution labels. In contrast, employing exogenous, high-precision labels from the National Land Cover Database in the source domain led to a notable accuracy increase of 2.55%, reaching 85.48%. This improvement highlights the model’s enhanced generalizability and performance during domain shifts, attributed significantly to the IBN layer. Our findings reveal that, despite the absence of native high-precision labels, the utilization of high-quality external labels can substantially benefit the development of precise land cover mapping, underscoring their potential in scenarios with unmatched labels. Full article
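The instance-batch normalization idea can be sketched in a few lines of PyTorch, following the common IBN-a layout (instance norm on half the channels for appearance invariance, batch norm on the rest); the 0.5 split ratio is the usual default, assumed here:

```python
import torch
import torch.nn as nn

class IBN(nn.Module):
    """IBN-a style layer: a drop-in replacement for BatchNorm2d."""
    def __init__(self, planes, ratio=0.5):
        super().__init__()
        self.half = int(planes * ratio)
        self.IN = nn.InstanceNorm2d(self.half, affine=True)  # style-invariant half
        self.BN = nn.BatchNorm2d(planes - self.half)         # content half

    def forward(self, x):
        a, b = torch.split(x, [self.half, x.size(1) - self.half], dim=1)
        return torch.cat([self.IN(a), self.BN(b)], dim=1)

y = IBN(64)(torch.randn(2, 64, 32, 32))   # same shape out as in
```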
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Procedural overview of the experimental process.</p>
Full article ">Figure 2
<p>Geospatial distribution of sampling sites for experimental data acquisition.</p>
Full article ">Figure 3
<p>Visual representation of label super-resolution. (<b>a</b>) is the structure diagram of FCN; (<b>b</b>,<b>c</b>) are schematic diagrams of the calculation principle of the joint distribution function.</p>
Full article ">Figure 4
<p>Architecture of the improved IBN-Net. (<b>a</b>) is the structure diagram of IBN-Net; (<b>b</b>) is the basic composition of the IBN residual block; (<b>c</b>) is the basic composition of the original residual block.</p>
Full article ">Figure 5
<p>Confusion matrices before and after label SR. (<b>a</b>) compares confusion matrices for different land cover products before and after SR in the source domain. (<b>b</b>) compares confusion matrices for different land cover products before and after SR in the target domain. (<b>c</b>) shows the improvement matrices after label SR, depicting the transformation from LR labels to SR labels. For (<b>a</b>,<b>b</b>), values on the diagonal represent the number of pixels that are correctly labeled. The improvement matrix is obtained by subtracting (<b>a</b>) from (<b>b</b>).</p>
Full article ">Figure 6
<p>Visualized results of labeled SR. (<b>a</b>) is the visual comparison for different land cover products before and after SR in source domain; (<b>b</b>) is the visual comparison for different land cover products before and after SR in target domain.</p>
Full article ">Figure 7
<p>OA and IoU scores of each category for different labels. (<b>a</b>) shows the OA of LR and SR labels; (<b>b</b>) shows the IoU score of LR and SR labels in various land cover categories.</p>
Full article ">Figure 8
<p>Qualitative comparison of source domain mapping results in the sample area. (<b>a</b>–<b>h</b>) show the predictions of the UNRU and UERU in four scenarios.</p>
Full article ">Figure 9
<p>Qualitative comparison of target domain mapping results in the sample area. (<b>a</b>–<b>t</b>) show the predictions of the UNRC, UERC, UEIC and CEIC in four scenarios.</p>
Full article ">Figure 10
<p>Heatmap of average IOU score of all land cover classes.</p>
Full article ">Figure 11
<p>Statistical histograms of mean values of different experimental evaluation indicators.</p>
Full article ">Figure 12
<p>Statistical Comparison of Global Products with UNIC. The yellow, orange, blue and green rectangles represent the details of the four scenes in the sample area respectively.</p>
Full article ">Figure 13
<p>The impact of introduced exogenous label noise on land cover mapping (L represents the intersection of the polyline of SR label and mapping result).</p>
Full article ">
25 pages, 7434 KiB  
Article
Properties of Cirrus Cloud Observed over Koror, Palau (7.3°N, 134.5°E), in Tropical Western Pacific Region
by Xiaoyu Sun, Christoph Ritter, Katrin Müller, Mathias Palm, Denghui Ji, Wilfried Ruhe, Ingo Beninga, Sharon Patris and Justus Notholt
Remote Sens. 2024, 16(8), 1448; https://doi.org/10.3390/rs16081448 - 19 Apr 2024
Cited by 1 | Viewed by 1217
Abstract
This study presented an analysis of the geometric and optical properties of cirrus clouds with data produced by Compact Cloud-Aerosol Lidar (ComCAL) over Koror, Palau (7.3°N, 134.5°E), in the Tropical Western Pacific region. The lidar measurement dataset covers April 2018 to May 2019 [...] Read more.
This study presents an analysis of the geometric and optical properties of cirrus clouds with data produced by the Compact Cloud-Aerosol Lidar (ComCAL) over Koror, Palau (7.3°N, 134.5°E), in the Tropical Western Pacific region. The lidar measurement dataset covers April 2018 to May 2019 and includes data collected during March, July and August 2022. The results show that cirrus clouds occur during approximately 47.9% of the lidar sampling time, predominantly between altitudes of 15 and 18 km. Seasonal variations in cirrus top height closely align with those of the cold point tropopause. Most cirrus clouds exhibit low cloud optical depth (COD < 0.1), with an annual mean depolarization ratio of 31 ± 19%. Convectively formed cirrus clouds during the summer monsoon season exhibit a larger size, accompanied by notably lower color ratio values. Extremely thin cirrus clouds (COD < 0.005), constituting 1.6% of total cirrus occurrences, are frequently observed at 1–2 km above the cold point, particularly during winter and summer, suggesting significant stratosphere–troposphere exchange. The coldest and highest tropopause over Palau persists during winter and is related to the pathway of tropospheric air entering the stratosphere through the cold trap. In summer, the extremely thin cirrus above the cold point is likely correlated with equatorial Kelvin waves induced by western Pacific monsoon convection. Full article
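The COD classes used throughout the article can be expressed as a simple threshold function; 0.03 and 0.3 are the customary subvisible/thin/thick cutoffs (assumed here, since the abstract only states the 0.005 ETTCi threshold):

```python
def classify_cirrus(cod):
    """COD-based cirrus classes; ETTCi is tested first because it is a
    sub-class of sub-visible cirrus (SVC)."""
    if cod < 0.005:
        return "ETTCi"   # extremely thin tropical cirrus
    if cod < 0.03:
        return "SVC"     # sub-visible cirrus
    if cod < 0.3:
        return "thin"
    return "thick"

print([classify_cirrus(c) for c in (0.003, 0.02, 0.15, 1.2)])
```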
(This article belongs to the Section Atmospheric Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>Maps of the location of Koror, Palau. (<b>a</b>) The site is located in Koror (marked on the left side of the map), the main commercial center of the Republic of Palau, an island country in the Micronesia subregion of Oceania in the western Pacific. (<b>b</b>) Sea Surface temperature (SST, in °C) in the tropical region averaged from 2015 to 2022. The black line shows the isothermal line of 28.5 °C. The location of Palau is marked in the plot. The SST data are obtained from the ocean surface diagnostic data collection in Modern-Era Retrospective Analysis for Research and Applications version 2 (MERRA-2) [<a href="#B41-remotesensing-16-01448" class="html-bibr">41</a>].</p>
Full article ">Figure 2
<p>(<b>a</b>) Schematic diagram of the ComCAL system. (<b>b</b>) The laser and top part of the lab container of the Palau Atmospheric Observatory (PAO) during the night. (<b>c</b>) Laser window. (<b>d</b>) Parabolic mirror with 400 mm aperture and 1200 mm focal length. (<b>e</b>) Picture of the ComCAL system inside the lab container.</p>
Full article ">Figure 3
<p>(<b>a</b>) Schematic diagram and (<b>b</b>) photo of the detector optics: 1: 90° off-axis mirror, 2, 3, 4, 7: dichroic mirrors, 5: detector for 1064 nm signal (interference filter, lens, APD), 6: rotating Glan–Taylor prism, 8, 9: Detectors for 532 nm and 355 nm signals (interference filters, lens, PMT), 10: Detector for 387 nm signal (interference filter, lens, PMT).</p>
Full article ">Figure 4
<p>Height–time plot of backscatter ratio (BSR) at 532 nm with cirrus cloud base <math display="inline"><semantics> <msub> <mi>C</mi> <mrow> <mi>b</mi> <mi>a</mi> <mi>s</mi> <mi>e</mi> </mrow> </msub> </semantics></math> (blue dots) and top <math display="inline"><semantics> <msub> <mi>C</mi> <mrow> <mi>t</mi> <mi>o</mi> <mi>p</mi> </mrow> </msub> </semantics></math> (orange dots) derived by the wavelet covariance transform (WCT) method. Lidar observations using ComCAL over Koror, Palau on (<b>a</b>) 5 August 2022, (<b>b</b>) 25 August 2022, and (<b>c</b>) 17 August 2022, represent three different scenarios of cloud geometric thickness ranging from thick to thin.</p>
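The WCT layer detection named in the caption can be sketched as a discrete Haar wavelet covariance transform of the backscatter profile; the dilation, the synthetic profile, and the sign convention below are illustrative assumptions:

```python
import numpy as np

def wct(profile, z, a=200.0):
    """Discrete Haar WCT; with this sign convention the transform peaks
    negative where the signal increases with height (cloud base) and
    positive where it decreases (cloud top)."""
    dz = z[1] - z[0]
    half = max(1, int(round(a / (2.0 * dz))))
    w = np.full(profile.shape, np.nan)
    for i in range(half, len(profile) - half):
        below = profile[i - half:i].sum()
        above = profile[i:i + half].sum()
        w[i] = (below - above) * dz / a
    return w

z = np.arange(0.0, 20000.0, 30.0)                       # height grid (m)
bsr = 1 + 4 * np.exp(-0.5 * ((z - 16000) / 400) ** 2)   # synthetic cirrus layer
w = wct(bsr, z)
cloud_base = z[np.nanargmin(w)]                          # strongest increase
cloud_top = z[np.nanargmax(w)]                           # strongest decrease
```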
Full article ">Figure 5
<p>Schematic of the data processing used by the ComCAL to obtain cirrus properties.</p>
Full article ">Figure 6
<p>Cirrus monthly percentage occurrence (PO) in Palau by ComCAL. The numbers in <a href="#remotesensing-16-01448-f006" class="html-fig">Figure 6</a> refer to the number of detected cirrus layers. The blue and orange squares with 1-<math display="inline"><semantics> <mi>σ</mi> </semantics></math> error bars show the heights of the cloud base and cloud top, respectively. The heights of the cold point tropopause (CPT, brown circles), level of neutral buoyancy (LNB, pink “X”s), and level of minimum stability (LMS, cyan triangles), averaged for each month, were calculated from the meteorological profiles from radiosondes by the National Weather Service of Palau. See details of CPT and LMS in the <a href="#app4-remotesensing-16-01448" class="html-app">Appendix D</a>. The monthly occurrence of cirrus clouds is calculated solely from the data collected within each month of the year, as shown by the x ticks.</p>
Full article ">Figure 7
<p>Percentage occurrence (PO) of cirrus in Palau during winter (December–February, red line), spring (March–May, yellow line), summer (July–August, green line), autumn (October–November, brown line), and the yearly average (dashed black line). The annual mean of the cold point tropopause (CPT, gray dashed line), the level of neutral buoyancy (LNB, dotted line), and the level of minimum stability (LMS, dotted dashed line) are shown. For the specific months, years and seasonal divisions of the dataset, please see <a href="#remotesensing-16-01448-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 8
<p>Frequency of occurrence (FOC) distributions of the seasonal and annual geometrical thickness (GT) of cloud (<b>a</b>–<b>e</b>), <math display="inline"><semantics> <msub> <mi>C</mi> <mrow> <mi>b</mi> <mi>a</mi> <mi>s</mi> <mi>e</mi> </mrow> </msub> </semantics></math> (<b>f</b>–<b>j</b>) and <math display="inline"><semantics> <msub> <mi>C</mi> <mrow> <mi>t</mi> <mi>o</mi> <mi>p</mi> </mrow> </msub> </semantics></math> (<b>k</b>–<b>o</b>). The bin size is 1 km. Note that the FOC values sum to 1.0; for readability, the y axis extends only to the maximum FOC value, so the axis limit is below 1.0.</p>
Full article ">Figure 9
<p>FOC distributions of the seasonal and annual cloud base temperature (<b>a</b>–<b>e</b>), cloud top temperature (<b>f</b>–<b>j</b>), and mid-cloud temperature (<b>k</b>–<b>o</b>). The bin size is 5 °C.</p>
Figure 10
<p>FOC distributions of the seasonal and annual COD (<b>a</b>–<b>e</b>), particle depolarization ratio (<b>f</b>–<b>j</b>), and color ratio (<b>k</b>–<b>o</b>). The bin size is 0.2 for (<b>a</b>–<b>e</b>), 0.1 for (<b>f</b>–<b>j</b>), and 0.25 for (<b>k</b>–<b>o</b>).</p>
Figure 11
<p>(<b>a</b>) Annual averaged FOC distribution of the cloud base height for different CODs, as indicated by different colors and markers. The dashed line shows the distribution of the height of the CPT. (<b>b</b>) FOC of seasonal and annual ETTCi*, SVC, thin, and thick cloud. Since the FOC of ETTCi is very low compared with the other cloud types, its exact value is difficult to read from the figure; please see <a href="#remotesensing-16-01448-t003" class="html-table">Table 3</a> for details. Note that ETTCi [<a href="#B9-remotesensing-16-01448" class="html-bibr">9</a>] is a customized sub-classification of cirrus with a COD defined as less than 0.005, and it belongs to SVC. The fractions of SVC, thin, and thick cirrus sum to 1.0; the ETTCi fraction is not counted separately in this sum.</p>
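The COD-based cloud classes compared in this figure lend themselves to a small decision rule. In the sketch below, only the ETTCi threshold (COD &lt; 0.005) comes from the caption; the SVC/thin/thick boundaries of 0.03 and 0.3 are widely used values (after Sassen and Cho) assumed here for illustration, not necessarily the exact thresholds of this paper.

```python
def classify_cirrus(cod):
    """Assign a cirrus layer to SVC / thin / thick by cloud optical
    depth (COD), and flag ETTCi (a sub-class of SVC, COD < 0.005).
    The 0.03 and 0.3 boundaries are assumed illustrative values."""
    label = "SVC" if cod < 0.03 else "thin" if cod < 0.3 else "thick"
    return label, cod < 0.005   # (class, is_ETTCi)

# classify_cirrus(0.002) -> ("SVC", True): counted in both SVC and ETTCi,
# which is why the SVC/thin/thick fractions alone sum to 1.0.
```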
Figure 12
<p>(<b>a</b>) Time series of the daily averaged mid-cloud height of sub-visible cirrus (SVC, shown by green triangles), extremely thin tropical cirrus (ETTCi, shown by black “X”s) and cold point tropopause (CPT, shown by black circles). (<b>b</b>) Time series of the distance from the cold point tropopause (CPT) to the daily averaged mid-cloud height of SVC and ETTCi cirrus cloud. If the distance is a negative value, the cloud layer is higher than the CPT, and vice versa.</p>
Figure A1
<p>Monthly measurement hours of the lidar observations and cirrus cloud observations. The numbers in the upper plot give the average PO of cirrus relative to the total measurement time of the lidar system in each month.</p>
Figure A2
<p>An example case for the WCT method: (<b>a</b>) BSR at 532 nm and (<b>b</b>) WCT value as a function of height at 8:00 UTC. The squares show the height of the cloud base and the triangles show the height of the cloud top detected by the WCT method. (<b>c</b>) BSR at 532 nm as a function of time and height measured by lidar on 6 December 2018. The dashed line marks 8:00 UTC.</p>
Figure A3
<p>The increase in the lidar signal as a function of COD between 10.8 and 16.8 km on 2 September 2022. With a temporal resolution of 600 s there are 25 time steps for analysis; in the last 3 time steps the cloud is so thick that the lidar signal is dominated by noise, and these steps were screened out based on the scheme described in <a href="#sec2dot3dot1-remotesensing-16-01448" class="html-sec">Section 2.3.1</a>. Different colors show two examples: low COD (&lt;1) and high COD (about 2).</p>
Figure A4
<p>Daily and monthly averaged (<b>a</b>) height (km) and (<b>b</b>) temperature (K) of the cold point tropopause (CPT, red) and level of neutral buoyancy (LNB, blue) in 2018, 2019, and 2022 for each day (not only the days the lidar was operating). Note that in (<b>b</b>) the y axis for the LNB is potential temperature (K), while the CPT is plotted on a double y axis in two units: K and °C. The monthly mean and 1-σ standard error of the data are shown by the red line and error bars, respectively. Daily data are shown by triangles and squares for the CPT and LNB, respectively. The meteorological profiles are from radio soundings by the National Weather Service of Palau; see <a href="#sec2dot2dot2-remotesensing-16-01448" class="html-sec">Section 2.2.2</a>.</p>
Figure A5
<p>Similar to <a href="#remotesensing-16-01448-f0A4" class="html-fig">Figure A4</a>, but for the cold point tropopause (CPT, red) and the level of minimum stability (LMS, blue).</p>
Full article
16 pages, 7922 KiB  
Article
Canopy-Level Spectral Variation and Classification of Diverse Crop Species with Fine Spatial Resolution Imaging Spectroscopy
by Jie Dai, Marcel König, Elahe Jamalinia, Kelly L. Hondula, Nicholas R. Vaughn, Joseph Heckler and Gregory P. Asner
Remote Sens. 2024, 16(8), 1447; https://doi.org/10.3390/rs16081447 - 19 Apr 2024
Viewed by 1683
Abstract
With the increasing availability and volume of remote sensing data, imaging spectroscopy is an expanding tool for agricultural studies. One of the fundamental applications in agricultural research is crop mapping and classification. Previous studies have mostly focused on local to regional scales, and classifications were usually performed for a limited number of crop types. Leveraging fine spatial resolution (60 cm) imaging spectroscopy data collected by the Global Airborne Observatory (GAO), we investigated canopy-level spectral variations in 16 crop species from different agricultural regions in the U.S. Inter-specific differences were quantified through principal component analysis (PCA) of crop spectra and their Euclidean distances in the PC space. We also classified the crop species using support vector machines (SVM), demonstrating high classification accuracy with a test kappa of 0.97. A separate test with an independent dataset also returned high accuracy (kappa = 0.95). Classification using full reflectance spectral data (320 bands) and selected optimal wavebands from the literature resulted in similar classification accuracies. We demonstrated that classification involving diverse crop species is achievable, and we encourage further testing based on moderate spatial resolution imaging spectrometer data. Full article
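The workflow this abstract describes, PCA on canopy spectra plus an SVM classifier scored with kappa, can be sketched with scikit-learn as follows; the placeholder arrays, kernel choice, and train/test split are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are pixel spectra (320 bands), labels are 16 crops
rng = np.random.default_rng(0)
X = rng.random((1000, 320))
y = rng.integers(0, 16, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test kappa:", cohen_kappa_score(y_te, clf.predict(X_te)))

# Inter-specific differences: Euclidean distances between class centroids
# in the space of the first 16 principal components
pcs = PCA(n_components=16).fit_transform(X)
cents = np.array([pcs[y == k].mean(axis=0) for k in range(16)])
dists = np.linalg.norm(cents[:, None, :] - cents[None, :, :], axis=-1)
```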
Show Figures

Figure 1
<p>Study sites in the contiguous United States with the 2022 national cultivated cropland data layer (<a href="https://www.nass.usda.gov/Research_and_Science/Cropland/Release/index.php" target="_blank">https://www.nass.usda.gov/Research_and_Science/Cropland/Release/index.php</a>; last accessed on 17 April 2024) in the background. The five major U.S. farming regions are mapped and labeled [<a href="#B27-remotesensing-16-01447" class="html-bibr">27</a>]. Crop symbols represent the species involved in this study and are used with permission from the University of Maryland Center for Environmental Science (UMCES) Integration and Application Network (IAN) Symbol Library (<a href="http://ian.umces.edu/media-library/symbols" target="_blank">http://ian.umces.edu/media-library/symbols</a>; accessed on 17 April 2024). Detailed information about each site and the image acquisition dates can be found in <a href="#remotesensing-16-01447-t0A1" class="html-table">Table A1</a>.</p>
Figure 2
<p>An example green vegetation spectrum (green line) overlaid with optimal wavelengths identified by previous studies [<a href="#B22-remotesensing-16-01447" class="html-bibr">22</a>,<a href="#B37-remotesensing-16-01447" class="html-bibr">37</a>,<a href="#B38-remotesensing-16-01447" class="html-bibr">38</a>,<a href="#B39-remotesensing-16-01447" class="html-bibr">39</a>]. All vertical lines indicate the wavelengths of the 77 selected bands. The solid orange lines correspond to the 33 further-selected bands. Detailed band wavelengths can be found in <a href="#remotesensing-16-01447-t0A2" class="html-table">Table A2</a>.</p>
Figure 3
<p>Mean spectra of (<b>a</b>) original and (<b>b</b>) brightness-normalized reflectance data, as well as the (<b>c</b>) coefficients of variation, for all crop species. A site-specific plot with labeled standard deviations can be found in <a href="#remotesensing-16-01447-f0A1" class="html-fig">Figure A1</a>.</p>
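Brightness normalization, as shown in panel (b), typically divides each spectrum by its overall brightness so that comparisons emphasize spectral shape rather than albedo; the vector-norm form below is a common convention assumed here, not confirmed as the authors' exact formula.

```python
import numpy as np

def brightness_normalize(spectra):
    """Divide each spectrum (one per row) by its root-sum-of-squares
    brightness, suppressing albedo and illumination differences."""
    return spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

def coefficient_of_variation(spectra):
    """Per-band coefficient of variation (std/mean) across samples,
    the quantity plotted in panel (c)."""
    return spectra.std(axis=0) / spectra.mean(axis=0)
```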
Figure 4
<p>Euclidean distances between the centroids of the first 16 principal component bands of each crop were projected to a 2-dimensional space using multidimensional scaling. Both (<b>a</b>) original and (<b>b</b>) brightness-normalized reflectance data were examined. The distance between each pair can be found in <a href="#remotesensing-16-01447-t0A3" class="html-table">Table A3</a>.</p>
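The 2-D projection in this figure is metric multidimensional scaling applied to a precomputed distance matrix; a minimal sketch follows, where the random 16 × 16 matrix merely stands in for the centroid distances from the PCA step.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in for the 16 x 16 centroid distance matrix (symmetric, zero diagonal)
rng = np.random.default_rng(0)
pts = rng.random((16, 16))
dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = mds.fit_transform(dists)   # (16, 2) coordinates, one point per crop
```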
Figure 5
<p>Natural color composite (Red: 650 nm; Green: 560 nm; Blue: 480 nm) map of the IA site with the crop species map generated using the full spectra as classification input. Pixels not meeting the requirements of the spectral filters are not plotted. Grasses between crop fields were classified as the crop most similar to them in reflectance (i.e., soybean). The longitudes and latitudes of the northeast and southwest corners of the map area are labeled.</p>
Figure 6
<p>Natural color composite (Red: 650 nm; Green: 560 nm; Blue: 480 nm) map of an excerpt from the IA site with the crop species map generated using the full spectra as classification input. Pixels not meeting the requirements of the spectral filters are not plotted. The longitudes and latitudes of the northwest and southeast corners of the map area are labeled.</p>
Figure A1
<p>Mean and standard deviation (gray fill) of original and brightness-normalized reflectance of crops, as well as the coefficients of variation, at Sites CA1 (<b>a</b>–<b>c</b>), CA2 and CA3 (<b>d</b>–<b>f</b>), FL (<b>g</b>–<b>i</b>) as well as IA and MO (<b>j</b>–<b>l</b>).</p>
Full article
17 pages, 10714 KiB  
Article
Characterization of River Width Measurement Capability by Space Borne GNSS-Reflectometry
by April Warnock, Christopher S. Ruf and Arie L. Knoll
Remote Sens. 2024, 16(8), 1446; https://doi.org/10.3390/rs16081446 - 19 Apr 2024
Cited by 2 | Viewed by 1220
Abstract
In recent years, Global Navigation Satellite System reflectometry (GNSS-R) has been explored as a methodology for inland water body characterization. However, a thorough characterization of the sensitivity and behavior of the GNSS-R signal over inland water bodies is still needed to progress this area of research. In this paper, we characterize the uncertainty associated with Cyclone Global Navigation Satellite System (CYGNSS) measurements in the determination of river width. The characterization study uses data from a forward model that accurately simulates CYGNSS observations of mixed water/land scenes. The accuracy of the forward model is demonstrated by comparisons with actual observations of known water body shapes made at particular measurement geometries. Simulated CYGNSS data are generated over a range of synthetic scenes modeling a straight river subreach, and the results are analyzed to determine a predictive relationship between the peak SNR measured over the river subreaches and the river widths. An uncertainty analysis conducted using this predictive relationship indicates that, for simple river scenes, the SNR over the river is predictive of the river width to within ±5 m. The presence of clutter (surrounding water bodies) within ~500 m of a river causes perturbations in the SNR measured over the river, which can render the river width retrievals unreliable. The results of this study indicate that, for isolated, straight rivers, GNSS-R data are able to measure river widths as narrow as 160 m with ~3% error. Full article
(This article belongs to the Special Issue Modeling, Processing and Analysis of Microwave Remote Sensing Data)
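The predictive relationship the abstract mentions amounts to fitting a curve to simulated (width, peak SNR) pairs and inverting it for retrieval; the sketch below uses a linear fit and made-up sample values purely for illustration, and the paper's actual fitted form and coefficients are not reproduced here.

```python
import numpy as np

# Made-up simulated pairs of river width (m) and peak SNR (dB)
widths   = np.array([160.0, 176.0, 192.0, 208.0, 224.0])
peak_snr = np.array([2.1, 2.6, 3.0, 3.5, 3.9])

slope, intercept = np.polyfit(widths, peak_snr, 1)  # SNR ~ a*width + b

def retrieve_width(snr_peak):
    """Invert the fitted SNR(width) relation to estimate river width."""
    return (snr_peak - intercept) / slope

# Rough width uncertainty: fit residuals propagated through the slope
resid = peak_snr - (slope * widths + intercept)
width_sigma = resid.std(ddof=2) / abs(slope)
```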
Show Figures

Graphical abstract
Figure 1
<p>E2ES results compared to CYGNSS raw IF observations for three water bodies. The (<b>left</b>) column shows the processed CYGNSS raw IF SNR (solid blue lines) and the E2ES simulated SNR (dashed red lines). The (<b>right</b>) column shows the SP track overpass of each water body, colored by the CYGNSS SNR. (<b>Top row</b>): Overpass of the Rio Santiago in Peru on 17 March 2022; (<b>Middle row</b>): Overpass grazing Lake Ilopango in El Salvador on 8 October 2023; (<b>Bottom row</b>): Overpass of the Roosevelt River in Brazil on 26 March 2022.</p>
Figure 2
<p>CYGNSS raw IF overpasses used as input for the synthetically generated river simulations. (<b>Top</b>): Overpass of Lake Ilopango in El Salvador on 22 August 2019; (<b>Middle</b>): Overpass of the reservoir near the UHE Rondon II hydroelectric power plant in Brazil on 7 July 2022; (<b>Bottom</b>): Overpass of the Marañón River in Peru on 24 October 2022. The (<b>left</b>) figures show comparisons of the simulated E2ES SNR (red dashed lines) and the observations (solid blue lines). (<b>Right</b>) figures show the SP track, colored by observed SNR, relative to the water masks used for each simulation (blue).</p>
Figure 3
<p>Simulations of straight river overpasses where the SP track makes a perpendicular (angle = 90 degrees) approach relative to the river orientation, for Track 1 (<b>left column</b>), Track 2 (<b>middle column</b>), and Track 3 (<b>right column</b>). The (<b>top row</b>) shows the simulated SNR for river widths of 160 m (blue lines), 176 m (green lines), and 192 m (red lines), corresponding to the input masks shown in the (<b>bottom row</b>), where the river is shown as the blue line and the SP tracks are shown as the black arrowed lines.</p>
Figure 4
<p>SNR peak data points vs. river width (dots) and best-fit regressions (solid lines) for Track 1 (blue), Track 2 (green), and Track 3 (red) for the perpendicular SP track crossings of a straight, isolated river.</p>
Figure 5
<p>Simulations of straight river overpasses where the SP track makes an oblique (angle = 45 degrees) approach relative to the river orientation, for Track 1 (<b>left column</b>), Track 2 (<b>middle column</b>), and Track 3 (<b>right column</b>). The (<b>top row</b>) shows the simulated SNR for river widths of 160 m (blue lines), 176 m (green lines), and 192 m (red lines), corresponding to the input masks shown in the (<b>bottom row</b>), where the river is shown as the blue line and the SP tracks are shown as the black arrowed lines.</p>
Figure 6
<p>SNR peak data points vs. river width (dots) and best-fit regressions (solid lines) for Track 1 (blue), Track 2 (green), and Track 3 (red) for oblique SP track crossings of a straight, isolated river.</p>
Figure 7
<p>Absolute value of the SNR perturbation as a function of the distance of a 1000 m diameter circular lake from the center of the river for a perpendicular SP track river crossing. The dashed line indicates the standard deviation of noise in the SNR measurements.</p>
Figure 8
<p>Same as <a href="#remotesensing-16-01446-f007" class="html-fig">Figure 7</a> for an oblique 45-degree SP track river crossing. The dashed line indicates the standard deviation of noise in the SNR measurements.</p>
Full article