Remote Sens., Volume 15, Issue 19 (October-1 2023) – 252 articles

Cover Story (view full-size image): This paper compares the NASA LVIS (2005) and RIEGL (2021) LiDAR systems in evaluating Tropical Dry Forest (TDF) evolution over 16 years using waveform metrics. Naturally collected LVIS waveform data are contrasted with RIEGL's point-based data, which were transformed into simulated waveforms. Waveform analysis revealed a 2.8-meter average canopy height increase across successional stages. Metrics such as relative height, centroid (Cx, Cy), and radius of gyration displayed consistent shifts, signifying not just height gains but structural changes in the TDF's successional classification. This study underscores LiDAR's efficacy in forest assessment, highlighting rapid shifts in forest structure. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
23 pages, 39874 KiB  
Article
Crop Water Productivity from Cloud-Based Landsat Helps Assess California’s Water Savings
by Daniel Foley, Prasad Thenkabail, Adam Oliphant, Itiya Aneece and Pardhasaradhi Teluguntla
Remote Sens. 2023, 15(19), 4894; https://doi.org/10.3390/rs15194894 - 9 Oct 2023
Cited by 5 | Viewed by 2902
Abstract
Demands for food and water are increasing while the extent of arable land and the supply of accessible fresh water are decreasing. This poses global challenges as economies continue to develop and populations grow. With agriculture the leading consumer of water, a better understanding of how water is used to produce food may help increase Crop Water Productivity (CWP; kg/m3), the ratio of crop output per unit of water input (or crop per drop). Previous large-scale CWP studies have been useful for broad water-use modeling at coarser resolutions. However, obtaining more precise CWP estimates, especially for specific crop types in a particular area and growing season as outlined here, is important for informing farm-scale water management decisions. This study therefore focused on California's Central Valley, utilizing high-spatial-resolution satellite imagery of 30 m (0.09 hectares per pixel) to generate more precise CWP for commonly grown, water-intensive irrigated crops. First, two products were modeled and mapped: (1) Landsat-based Actual Evapotranspiration (ETa; mm/d), used to determine Crop Water Use (CWU; m3/m2), and (2) Crop Productivity (CP; kg/m2), used to estimate crop yield per growing season. CWP was then calculated by dividing CP by CWU and mapped. The amount of water that could be saved by increasing the CWP of each crop was further calculated. For example, in the 434 million m2 study area, a 10% increase in CWP across the nine crops analyzed had a potential water savings of 31.5 million m3. Increasing CWP is widely considered the best approach for saving the largest quantities of water. This paper proposed, developed, and implemented a workflow of combined methods utilizing cloud-computing-based remote sensing data. The environmental implications of this work in assessing water savings for food and water security in the 21st century are expected to be significant. Full article
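The core arithmetic of the workflow, CWP as the ratio of crop productivity to crop water use, and the water saved if the same yield is achieved at a higher CWP, can be sketched as follows (a minimal illustration with hypothetical helper names, not the authors' code):

```python
import numpy as np

def crop_water_productivity(cp, cwu):
    """CWP (kg/m^3) = Crop Productivity (kg/m^2) / Crop Water Use (m^3/m^2),
    computed per pixel; pixels with no water use are left at zero."""
    cp = np.asarray(cp, dtype=float)
    cwu = np.asarray(cwu, dtype=float)
    return np.divide(cp, cwu, out=np.zeros_like(cp), where=cwu > 0)

def water_savings_m3(cwu, pixel_area_m2, cwp_gain=0.10):
    """Water saved if the same yield is produced at (1 + cwp_gain) * CWP:
    the new CWU is CWU / (1 + cwp_gain), so savings = CWU * gain / (1 + gain)."""
    total_cwu_m3 = float(np.sum(np.asarray(cwu, dtype=float) * pixel_area_m2))
    return total_cwu_m3 * cwp_gain / (1.0 + cwp_gain)
```

With a seasonal CWU of roughly 0.8 m3/m2 over the 434 million m2 study area (that depth is an assumption for illustration only), this formula reproduces a savings on the order of the reported 31.5 million m3 for a 10% CWP increase.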
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Show Figures

Figure 1

<p>Selected crop types from the Cropland Data Layer (CDL) map. Study area map displaying almonds, cotton, winter wheat, pistachios, grapes, barley, rice, corn, and walnuts within the Central Valley of California, centered around Firebaugh, California, for 2016. The inset map displays Fresno County highlighted.</p>
Figure 2
<p>Crop Water Productivity (CWP) methodology flowchart. Order of operations utilized in this CWP analysis. Crop Yield, Crop Area, and Crop Productivity were based on a single annual value per crop type. ET<sub>f</sub>, ET<sub>o</sub>, ET<sub>a</sub> were derived as daily values throughout the year. ET<sub>f</sub> and ET<sub>a</sub> values were calculated per pixel from Landsat 8 images. ET<sub>o</sub> was acquired from CIMIS data corresponding to Landsat 8 image overpass dates. CWP, CWU, and Economic CWP values were derived from these previous datasets. ET<sub>a</sub> = Actual Evapotranspiration, ET<sub>o</sub> = Reference Evapotranspiration, ET<sub>f</sub> = Evaporative Fraction, CIMIS = California Irrigation Management Information System [<a href="#B21-remotesensing-15-04894" class="html-bibr">21</a>], CDL = Cropland Data Layer [<a href="#B28-remotesensing-15-04894" class="html-bibr">28</a>].</p>
Figure 3
<p>Actual Evapotranspiration (ET<sub>a</sub>) map production process. Per pixel ET<sub>a</sub> (mm/d) maps at 30 m resolution produced from image processing by multiplying Evaporative Fraction (ET<sub>f</sub>) (unitless) by Reference Evapotranspiration (ET<sub>o</sub>) (mm/d).</p>
Figure 4
<p>Actual Evapotranspiration (ET<sub>a</sub>) study area map. Example of per pixel ET<sub>a</sub> (mm/d) map from a Landsat 8 image [<a href="#B33-remotesensing-15-04894" class="html-bibr">33</a>] for 7 August 2016. Higher ET<sub>a</sub> displayed in red and lower ET<sub>a</sub> in blue at 30 m resolution.</p>
Figure 5
<p>Comparison of calculated Actual Evapotranspiration (Foley ET<sub>a</sub>) with OpenET. Calculated ET<sub>a</sub> from this study referred to as Foley ET<sub>a</sub> (mm/d) plotted versus OpenET (mm/d) [<a href="#B47-remotesensing-15-04894" class="html-bibr">47</a>] for nine commonly grown and water intensive crops in the Central Valley of California field site for June 2016. Note that monthly OpenET in total mm has been disaggregated to mm/d by dividing by days in the month. The linear regression equation is Foley ET<sub>a</sub> = 1.04*OpenET + 0.293 with R<sup>2</sup> = 0.89.</p>
Figure 6
<p>Crop Water Productivity (CWP) map of the study area. Average CWP (kg/m<sup>3</sup>) for nine commonly grown and water intensive crops in the Central Valley of California in 2016 by crop type. Higher CWP displayed in red and lower CWP displayed in blue.</p>
Figure 7
<p>Plot of CWP, ECWP, and water savings potential at 10% CWP increase per area. CWP (kg/m<sup>3</sup>), ECWP (<span>$</span>/m<sup>3</sup>) [<a href="#B116-remotesensing-15-04894" class="html-bibr">116</a>], and the ratio of water savings potential at 10% CWP increase per crop area (m<sup>3</sup>/m<sup>2</sup>) plotted for nine commonly grown and water intensive crops in 2016 in the Central Valley of California study area. CWP = Crop Water Productivity, ECWP = Economic Crop Water Productivity.</p>
17 pages, 3848 KiB  
Article
A Multi-Objective Geoacoustic Inversion of Modal-Dispersion and Waveform Envelope Data Based on Wasserstein Metric
by Jiaqi Ding, Xiaofeng Zhao, Pinglv Yang and Yapeng Fu
Remote Sens. 2023, 15(19), 4893; https://doi.org/10.3390/rs15194893 - 9 Oct 2023
Viewed by 1367
Abstract
The inversion of acoustic field data to estimate geoacoustic parameters has been a prominent research focus in underwater acoustics for several decades. Modal-dispersion curves have been used to invert seabed sound speed and density profiles, but such techniques do not account for attenuation inversion. In this study, a new approach is proposed in which modal-dispersion and waveform envelope data are simultaneously inverted under a multi-objective framework. The inversion is performed using the Multi-Objective Bayesian Optimization (MOBO) method. The posterior probability densities (PPD) of the estimates are obtained by resampling from the explored state space using the Gibbs Sampler. The implemented MOBO approach is compared with individual inversions of the modal-dispersion curves and of the waveform data. In addition, the effective use of the Wasserstein metric from optimal transport theory is explored: MOBO performance is tested against two different cost functions based on the L2 norm and the Wasserstein metric, respectively. Numerical experiments are employed to evaluate the effect of the different cost functions on inversion performance. It is found that the MOBO approach may have more pronounced advantages when the Wasserstein metric is applied. Results obtained from our study reveal that the MOBO approach exhibits reduced uncertainty in the inversion results compared to individual inversion methods, such as modal-dispersion inversion or waveform inversion. However, this enhanced uncertainty reduction comes at the cost of reduced accuracy in certain parameters other than the sediment sound speed and attenuation. Full article
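For 1-D data such as a dispersion curve sampled at matching points, the two cost functions compared in the study can be sketched as below (hypothetical helper names; for equally sized samples the first-order Wasserstein distance reduces to the mean absolute difference of the sorted values):

```python
import numpy as np

def l2_misfit(observed, modeled):
    """Euclidean (L2) misfit between observed and modeled data vectors."""
    d = np.asarray(observed, float) - np.asarray(modeled, float)
    return float(np.linalg.norm(d))

def wasserstein_1d(observed, modeled):
    """First-order Wasserstein (earth mover's) distance between two
    equally sized 1-D samples: sort both and average the |differences|."""
    a = np.sort(np.asarray(observed, float))
    b = np.sort(np.asarray(modeled, float))
    return float(np.mean(np.abs(a - b)))
```

Unlike the pointwise L2 norm, the Wasserstein metric compares the two data sets as distributions, which can make the cost surface better behaved under shifts, one motivation for its use in waveform-based inversion.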
(This article belongs to the Special Issue Recent Advances in Underwater and Terrestrial Remote Sensing)
Show Figures

Figure 1

<p>Simulated geoacoustic model and sound speed profile obtained from the SW06 experiment. The source depth is 22 m, and a single receiver is placed 7.0 km from the source at a depth of 55 m. Note that the graph is not to scale.</p>
Figure 2
<p>(Color online) Numerical signals. (<b>a</b>,<b>b</b>) show the received and corresponding warped signals, respectively. (<b>c</b>) The signal after warping (the continuous curves) and unwarping (the red circles). Keep in mind that the time axes of the two subfigures are not equivalent.</p>
Figure 3
<p>(Color online) (<b>a</b>) Spectrogram of the warped signal. The white polygon shows the region of the TF mask applied to filter mode 3. (<b>b</b>) The frequency spectrum of the warped signal for the frequency band 0–50 Hz, and the embedded graph is warped mode 3. (<b>c</b>) Spectrogram of the received signal. White dotted lines represent the theoretical DCs for the first four modes. The estimated DCs are also displayed as red curves upon the time-frequency spectrogram (panel (<b>c</b>)).</p>
Figure 4
<p>(Color online) Data fit achieved in inversions of DCs. Comparison between the observed modal-dispersion data (red solid lines) and the posteriori ensembles (blue dashed lines) obtained from DCs-BO inversion. The green dotted line represents the best model and the gray region is a 95% credible interval from the Bayesian sampling.</p>
Figure 5
<p>(Color online) (<b>a</b>) The upper envelope (red line) and lower envelope (orange line) of the observed data. (<b>b</b>) Data fits achieved in FWH inversion. Comparison between the observed FWH data (blue solid lines) and the posteriori ensembles (purple dashed lines) obtained from FWH-BO inversion. The green solid line represents the best model and the gray region is the 95% credible interval from the Bayesian sampling.</p>
Figure 6
<p>The posterior distribution of likelihood function in MOBO inversions, with black solid points representing the true values. (<b>a</b>,<b>b</b>) Likelihood distributions of DCs based on (<b>a</b>) L2 term and (<b>b</b>) Wasserstein metric. (<b>c</b>,<b>d</b>) Likelihood distributions of FWH based on (<b>c</b>) L2 term and (<b>d</b>) Wasserstein metric.</p>
Figure 7
<p>Posteriori models based on L2-MOBO and corresponding data fits for (<b>b</b>) FWH and (<b>c</b>) DCs, along with marginal distributions of (<b>d</b>) sediment thickness, (<b>e</b>) sediment sound speed, (<b>f</b>) sediment density, (<b>g</b>) basement density, (<b>h</b>) sediment attenuation and (<b>i</b>) basement attenuation. (<b>a</b>) Models of sound velocity and sediment depth. The results are represented with different colors, indicating the probability of the models, from the best 20%, 50% (medium) and all. Black vertical lines mark the actual sound speed values in the sediment and basement. The horizontal dashed line and the patch show the true value and distribution interval of sediment depth, respectively. The black solid lines, red solid lines and blue dashed lines in (<b>b</b>,<b>c</b>) indicate the average inversion values, observed values, and misfits under the posterior, respectively. The grouping of the best and full models and the colors are consistent with (<b>a</b>). The grey uncertainty intervals in (<b>b</b>,<b>c</b>) represent the range of values that contains the mean of the data, with a 95% credible interval. Black vertical dashed lines in (<b>d</b>–<b>i</b>) represent the true value of parameters.</p>
Figure 8
<p>Posteriori models based on Wasserstein-MOBO and corresponding data fits for (<b>b</b>) FWH and (<b>c</b>) DCs, along with marginal distributions of (<b>d</b>) sediment thickness, (<b>e</b>) sediment sound speed, (<b>f</b>) sediment density, (<b>g</b>) basement density, (<b>h</b>) sediment attenuation and (<b>i</b>) basement attenuation. (<b>a</b>) Models of sound velocity and sediment depth. The results are represented with different colors, indicating the probability of the models, from the best 20%, 50% (medium) and all. Black vertical lines mark the actual sound speed values in the sediment and basement. The horizontal dashed line and the patch show the true value and distribution interval of sediment depth, respectively. The black solid lines, red solid lines and blue dashed lines in (<b>b</b>,<b>c</b>) indicate the average inversion values, observed values, and misfits under the posterior, respectively. The grouping of the best and full models and the colors are consistent with (<b>a</b>). The grey uncertainty intervals in (<b>b</b>,<b>c</b>) represent the range of values that contains the mean of the data, with a 95% credible interval. Black vertical dashed lines in (<b>d</b>–<b>i</b>) represent the true value of parameters.</p>
19 pages, 13458 KiB  
Article
Dim Moving Multi-Target Enhancement with Strong Robustness for False Enhancement
by Yuke Zhang, Xin Chen, Peng Rao and Liangjie Jia
Remote Sens. 2023, 15(19), 4892; https://doi.org/10.3390/rs15194892 - 9 Oct 2023
Cited by 2 | Viewed by 1178
Abstract
In a space-based infrared system, target enhancement plays a crucial role in improving target detection capability. However, existing target enhancement methods face challenges in multi-target scenarios and at a low signal-to-noise ratio (SNR). Furthermore, false enhancement poses a serious problem that degrades overall performance. To address these issues, a new enhancement method for dim moving multi-targets with strong robustness against false enhancement is proposed in this paper. First, multi-target localization is performed by spatial–temporal filtering and connected-component detection. Then, the motion vector of each target is obtained using optical flow detection. Finally, the consecutive images are convolved in 3D with a convolution kernel guided by the motion vector of the target. This process accumulates the target energy. The experimental results demonstrate that the algorithm can adaptively enhance multiple targets and notably improve the SNR under low-SNR conditions. Moreover, it exhibits outstanding performance compared to other algorithms and possesses strong robustness against false enhancement. Full article
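The energy-accumulation idea can be illustrated with a simple shift-and-add along the estimated motion vector (a toy sketch with hypothetical names, using periodic np.roll shifts in place of the paper's motion-guided 3D convolution kernel):

```python
import numpy as np

def accumulate_along_motion(frames, vx, vy):
    """Shift frame k back by k*(vx, vy) pixels and average, so a target
    moving at (vx, vy) pixels/frame adds coherently while uncorrelated
    noise is averaged down (boosting SNR by roughly sqrt(N)).
    Note: np.roll wraps at the borders, a simplification for this sketch."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        acc += np.roll(np.roll(frame, -k * vy, axis=0), -k * vx, axis=1)
    return acc / len(frames)
```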
Show Figures

Figure 1

<p>Different forms of the target.</p>
Figure 2
<p>The long-wave infrared image with 10 different simulated targets.</p>
Figure 3
<p>The flow chart of the proposed algorithm.</p>
Figure 4
<p>The process of multi-target judgement and rough localization.</p>
Figure 5
<p>The diagram of 3D convolution.</p>
Figure 6
<p>The enlarged view of targets and the 3D display of them in original and enhanced images.</p>
Figure 7
<p>The results of different evaluation metrics with different <math display="inline"><semantics> <mi>δ</mi> </semantics></math>.</p>
Figure 8
<p>The enhancement results of different types of images.</p>
Figure 9
<p>The enhancement results of the multi-targets. (<b>A</b>) shows the original targets. (<b>B</b>) shows the 3D display of them. (<b>C</b>) shows the enhanced targets, and (<b>D</b>) shows the corresponding 3D displays; the red oval marks the target.</p>
Figure 10
<p>Failure result of 3DCFSI affected by blind pixels.</p>
Figure 11
<p>Visual comparison between typical methods and proposed algorithm. Squares represent the targets. (<b>a</b>) The raw images of Seq.1-Seq.4, respectively. (<b>b</b>–<b>e</b>) are the enhanced results of improved top-hat algorithm, DPA-based energy accumulation, 3DCFSI-based method, ECA method and proposed algorithm, respectively.</p>
Figure 12
<p>The enhancement results of images with stars.</p>
7 pages, 11937 KiB  
Communication
Preliminary Investigation of Sudden Ground Subsidence and Building Tilt in Balitai Town, Tianjin City, on 31 May 2023
by Haonan Jiang, Timo Balz, Jianan Li and Vishal Mishra
Remote Sens. 2023, 15(19), 4891; https://doi.org/10.3390/rs15194891 - 9 Oct 2023
Cited by 5 | Viewed by 1551
Abstract
A short-term rapid subsidence event occurred in the Bi Guiyuan community in Balitai Town, Tianjin City, leading to the tilting of high-rise buildings and the emergency evacuation of over 3000 residents. In response to this incident, InSAR (Interferometric Synthetic Aperture Radar) technology was swiftly employed to monitor subsidence in the area before and after the event. Our observations indicate that the region had remained stable for 8 months prior to the incident. However, over the course of the 15-day event, the ground experienced more than 10 mm of subsidence. By integrating the findings of the InSAR analysis with geological studies, we speculate that the rapid subsidence in the region is related to the extraction of geothermal resources. It is suspected that during drilling operations the wellbore mistakenly penetrated a massive underground karst cavity. This resulted in a sudden, rapid leakage of drilling fluid, creating a pressure differential that caused the overlying soil layers to collapse and rapidly sink into the cavity. As a result, short-term rapid subsidence of the ground surface and tilting of high-rise buildings occurred. Full article
(This article belongs to the Special Issue New Developments in Remote Sensing for the Environment II)
Show Figures

Graphical abstract

Figure 1
<p>Research Area map. Map of Tianjin is shown in (<b>a</b>) and the red dot in Jinnan District represents the specific location of the event. The Red polygon in (<b>b</b>) is the boundary of the district where the sudden subsidence occurred, and the green rectangle is the boundary of Geothermal Field No. 52. Field investigation is depicted in (<b>c</b>), showing the damage caused by the event. Photograph credits: China Central Television (CCTV).</p>
Figure 2
<p>Graph of the temporal network used for InSAR time-series analysis. Red line marks the time of the sudden ground subsidence event.</p>
Figure 3
<p>Cumulative subsidence map of research area. The three different colored triangles in (<b>a</b>) are the three reference points extracted, and their time series are represented by the same color in (<b>b</b>). The blue shaded area in (<b>b</b>) is noted as the event period.</p>
Figure 4
<p>A simplified geological profile of Tianjin and a schematic diagram of the wellbore leakage process.</p>
16 pages, 3519 KiB  
Article
Validating and Developing Hyperspectral Indices for Tracing Leaf Chlorophyll Fluorescence Parameters under Varying Light Conditions
by Jie Zhuang, Quan Wang, Guangman Song and Jia Jin
Remote Sens. 2023, 15(19), 4890; https://doi.org/10.3390/rs15194890 - 9 Oct 2023
Cited by 11 | Viewed by 2343
Abstract
Chlorophyll a fluorescence (ChlFa) parameters provide insight into the physiological and biochemical processes of plants and have been widely applied to monitor and evaluate the photochemical processes and photosynthetic capacity of plants in a variety of environments. Recent advances in remote sensing provide new opportunities for detecting ChlFa at large scales but still demand tremendous effort. Among the possible approaches, applying hyperspectral indices is always an option, but the performance of hyperspectral indices in detecting ChlFa parameters under varying light conditions has been much less investigated. The objective of this study is to investigate the performance of reported hyperspectral indices in tracking ChlFa parameters under different light conditions and to develop and evaluate novel spectral indices. An experiment was therefore conducted to simultaneously measure the ChlFa parameters and spectral reflectance of sunlit and shaded leaves under varying light conditions, and 28 reported hyperspectral indices were examined for their performance in tracking the ChlFa parameters. Furthermore, we developed novel hyperspectral indices based on various spectral transformations. The results indicated that the maximum quantum efficiency of photosystem II (PSIImax), the cumulative quantum yield of photochemistry (ΦP), and the fraction of open reaction centers in photosystem II (qL) of sunlit leaves were significantly higher than those of shaded leaves, while the cumulative quantum yields of regulated thermal dissipation (ΦN) and fluorescence (ΦF) of shaded leaves were higher than those of sunlit leaves. ChlFa parameters could not be traced efficiently with previously published spectral indices. In comparison, all ChlFa parameters were well quantified in shaded leaves when using the novel hyperspectral indices, although the indices for tracing non-photochemical quenching (NPQ) and ΦF were not stable, especially for sunlit leaves. Our findings justify the use of hyperspectral indices as a practical approach to estimating ChlFa parameters. However, caution should be exercised when using spectral indices to track ChlFa parameters, given the differences between sunlit and shaded leaves. Full article
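The RPD score used in the paper's figures to rank indices is the standard deviation of the measurements divided by the RMSE of the predictions; a minimal sketch (hypothetical function name; conventions for the deviation term vary, and the population standard deviation is assumed here):

```python
import numpy as np

def rpd(measured, predicted):
    """Ratio of performance to deviation: SD(measured) / RMSE(predicted).
    Larger is better; an RPD around 2 or more is commonly read as a
    reliable calibration and below about 1.4 as poor."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return float(np.std(measured) / rmse)
```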
Show Figures

Figure 1

<p>The possible fate of light energy absorbed by the leaf.</p>
Figure 2
<p>Different spectral reflectance transformations from original (OR) (<b>a</b>), first-order (first) (<b>b</b>), logarithm (Log) (<b>c</b>), standard normal variate transformation (SNV) (<b>d</b>), multiplicative scatter correction (MSC) (<b>e</b>), and extended multiplicative scatter correction (EMSC) (<b>f</b>) for all leaves and sunlit and shaded leaves; color coding is used for different leaf groups.</p>
Figure 3
<p>Comparing the maximum quantum efficiency of photosystem II (PSII<sub>max</sub>) (<b>a</b>), non-photochemical quenching (NPQ) (<b>b</b>), the fraction of open reaction centers in photosystem II (qL) (<b>c</b>), the cumulative quantum yield of photochemistry (ΦP) (<b>d</b>), regulated thermal dissipation (ΦN) (<b>e</b>), and fluorescence (ΦF) (<b>f</b>) for all leaves and sunlit and shaded leaves; in the boxplot, the black lines and white diamonds are the median lines and mean points, respectively; number and n represent the mean value and the sample size in each group; color coding is used for different leaf groups; asterisks represent significant differences of <span class="html-italic">t</span>-test (NS. <span class="html-italic">p</span> &gt; 0.05, * <span class="html-italic">p</span> ≤ 0.05, ** <span class="html-italic">p</span> ≤ 0.01, *** <span class="html-italic">p</span> ≤ 0.001). Descriptive statistics of <span class="html-italic">t</span>-test can be viewed in <a href="#app1-remotesensing-15-04890" class="html-app">Table S3</a>.</p>
Figure 4
<p>Performance of published indices for estimating ChlFa parameters among all leaves (green) and sunlit (red) and shaded leaves (blue); RPD is the ratio of performance to deviation; the red box is the best index based on RPD evaluation; all RPD values are presented in <a href="#app1-remotesensing-15-04890" class="html-app">Tables S4–S6</a>.</p>
Figure 5
<p>Performance of different indices for the estimation of the maximum quantum efficiency of photosystem II (PSII<sub>max</sub>) (<b>a</b>), non-photochemical quenching (NPQ) (<b>b</b>), the fraction of open reaction centers in photosystem II (qL) (<b>c</b>), the cumulative quantum yield of photochemistry (ΦP) (<b>d</b>), regulated thermal dissipation (ΦN) (<b>e</b>), and fluorescence (ΦF) (<b>f</b>) applying various spectral transformations among all leaves, sunlit leaves, and shaded leaves; color coding is used for different leaf groups, and shape coding is used for different index types; RPD is the ratio of performance to deviation; the label is the best index based on RPD evaluation for different leaf groups. The wavelength information of the determined new spectral indices can be viewed in <a href="#app1-remotesensing-15-04890" class="html-app">Table S7</a>.</p>
Figure 6
<p>Measurements and predictions of the maximum quantum efficiency of photosystem II (PSIImax) (<b>a</b>), non-photochemical quenching (NPQ) (<b>b</b>), the fraction of open reaction centers in photosystem II (qL) (<b>c</b>), the cumulative quantum yield of photochemistry (ΦP) (<b>d</b>), regulated thermal dissipation (ΦN) (<b>e</b>), and fluorescence (ΦF) (<b>f</b>) using newly developed indices in all leaves, sunlit leaves, and shaded leaves; color coding is used for different leaf groups; the black dashed line represents the 1:1 line; R<sup>2</sup> is the coefficient of determination, RMSE is the root mean square error, and RPD is the ratio of performance to deviation.</p>
23 pages, 6357 KiB  
Article
Performance of the Atmospheric Radiative Transfer Simulator (ARTS) in the 600–1650 cm−1 Region
by Zichun Jin, Zhiyong Long, Shaofei Wang and Yunmeng Liu
Remote Sens. 2023, 15(19), 4889; https://doi.org/10.3390/rs15194889 - 9 Oct 2023
Cited by 2 | Viewed by 1619
Abstract
The Atmospheric Radiative Transfer Simulator (ARTS) has been widely used for radiative transfer simulation from the microwave to the terahertz region. Because the same physical principles apply, ARTS can also be used for simulations of the thermal infrared (TIR). However, thorough evaluations of ARTS in the TIR region are still lacking. Here, we evaluated the performance of ARTS in the 600–1650 cm−1 region, taking the Line-By-Line Radiative Transfer Model (LBLRTM) as the reference model. Additionally, the moderate resolution atmospheric transmission (MODTRAN) band model (BM) and correlated-k (CK) methods were used for comparison. The comparison on a 0.001 cm−1 spectral grid showed high agreement (sub-0.1 K) between ARTS and LBLRTM, with a mean bias difference (MBD) and root mean square difference (RMSD) of less than 0.05 K and 0.3 K, respectively. After convolution with the spectral response functions of the Atmospheric Infra-Red Sounder (AIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS), the brightness temperature (BT) differences between ARTS and LBLRTM became smaller, with RMSDs of <0.1 K. The Jacobian comparison showed that the Jacobians calculated by ARTS and LBLRTM were close for temperature (usable for Numerical Weather Prediction applications) and O3 (an excellent Jacobian fit). For the water vapor Jacobian, the difference increased with increasing water vapor content. However, at extremely low water vapor values (0.016 ppmv in this study), LBLRTM exhibited non-physical mutations, while ARTS remained smooth. This study aims to help users understand the simulation accuracy of ARTS in the TIR region and to support the improvement of ARTS by the community. Full article
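The two summary statistics used throughout the comparison, MBD and RMSD of the brightness-temperature differences, can be sketched as follows (hypothetical function name):

```python
import numpy as np

def bt_difference_stats(bt_test, bt_reference):
    """Mean bias difference (MBD) and root mean square difference (RMSD),
    in K, of a test model's brightness temperatures against a reference
    (here, ARTS or MODTRAN against LBLRTM)."""
    d = np.asarray(bt_test, float) - np.asarray(bt_reference, float)
    return float(np.mean(d)), float(np.sqrt(np.mean(d ** 2)))
```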
(This article belongs to the Section Atmospheric Remote Sensing)
Show Figures

Figure 1

<p>The distribution of (<b>a</b>) atmospheric temperature and (<b>b</b>–<b>g</b>) six gases of the 83 profiles as a function of altitude.</p>
Figure 2
<p>The statistics of the differences between ARTS and LBLRTM for all 83 profiles at 0° (the red violin), 30° (the blue violin), and 60° (the green violin). The white dot represents the median, and the two black dots represent the upper and lower quartiles. The two colored dots represent the 5th and 95th percentiles.</p>
Figure 3
<p>The differences between ARTS and LBLRTM for (<b>a</b>) all gases, (<b>b</b>) CO<sub>2</sub> (absorption lines and continuum), (<b>c</b>) O<sub>3</sub>, and (<b>d</b>) H<sub>2</sub>O (absorption lines and continuum) for profile 83. The color represents the points number in the area: the redder the color, the larger the number, and the bluer, the smaller.</p>
Figure 4
<p>As in <a href="#remotesensing-15-04889-f003" class="html-fig">Figure 3</a> for subregion III.</p>
Figure 5
<p>BT differences between (<b>a</b>–<b>c</b>) ARTS, (<b>d</b>–<b>f</b>) MODTRAN-BM, and (<b>g</b>–<b>i</b>) MODTRAN-CK and LBLRTM on AIRS channels. The values outside/inside brackets represent the MBD/RMSD of each subregion, and the unit is K.</p>
Figure 6
<p>The atmospheric profiles corresponding to RMSDs at the 0/25/50/75/100-th percentile for (<b>a</b>) Subregion I, (<b>b</b>) Subregion II, (<b>c</b>) Subregion III, (<b>d</b>) Subregion IV, (<b>e</b>) Subregion V, (<b>f</b>) Subregion VI. The 0-th and 100-th percentile represents the smallest and biggest RMSD, respectively.</p>
Figure 7
<p>The goodness-of-fit measure <span class="html-italic">M</span> for (<b>a</b>) temperature Jacobian in subregion I, (<b>b</b>) ozone Jacobian in subregion III, (<b>c</b>) water vapor Jacobian in subregion V, and (<b>d</b>–<b>f</b>) channel Jacobians corresponding to the maximum <span class="html-italic">M</span> value in the three subregions. For (<b>d</b>–<b>f</b>), the solid and dashed lines represent the channel Jacobians of LBLRTM and ARTS, respectively.</p>
Full article ">Figure 8
<p>BT differences between (<b>a</b>) ARTS, (<b>b</b>) MODTRAN-BM, and (<b>c</b>) MODTRAN-CK and LBLRTM on MODIS bands. The bottom and top axes represent MODIS band numbers and corresponding wavenumbers, respectively.</p>
Full article ">Figure 9
<p>The differences between ARTS and LBLRTM (with/without line mixing) for profile (<b>a</b>,<b>b</b>) 81, (<b>c</b>,<b>d</b>) 82, and (<b>e</b>,<b>f</b>) 83 at 0° in subregion I.</p>
Full article ">Figure A1
<p>BT consistency between ARTS and LBLRTM as a function of wavenumber: (<b>a</b>) BTs simulated by LBLRTM for profile 83; (<b>b</b>) MBDs and (<b>c</b>) RMSDs of the BT differences for all 83 profiles.</p>
Full article ">Figure A2
<p>BT consistency (profile 83 and 0°) between ARTS and LBLRTM as a function of wavenumber for the five single gases, i.e., (<b>a</b>,<b>b</b>) CO<sub>2</sub>, (<b>c</b>,<b>d</b>) H<sub>2</sub>O, (<b>e</b>) O<sub>3</sub>, (<b>f</b>) CH<sub>4</sub>, and (<b>g</b>) N<sub>2</sub>O. (<b>I</b>–<b>VI</b>) represent different subregions.</p>
Full article ">
19 pages, 8386 KiB  
Article
The Doppler Characteristics of Sea Echoes Acquired by Motion Radar
by Pengbo Du, Yunhua Wang, Xin Li, Jianbo Cui, Yanmin Zhang, Qian Li and Yushi Zhang
Remote Sens. 2023, 15(19), 4888; https://doi.org/10.3390/rs15194888 - 9 Oct 2023
Viewed by 1443
Abstract
The Doppler characteristics of sea surface echoes reflect the time-varying characteristics of the sea surface and can be used to retrieve ocean dynamic parameters and detect targets. On airborne, spaceborne, and shipborne radar platforms, the radar moves along with the platform while illuminating the sea surface. In this case, the area of the sea surface illuminated by the radar beam changes rapidly with the motion, and the coherence of the backscattered echoes at different times decreases significantly. The Doppler characteristics of the echoes are therefore also affected by the radar motion. At present, the computational requirements for simulating the Doppler spectrum of the microwave scattering field from the sea surface with numerical methods are huge. To overcome this problem, a new method based on sub-scattering surface elements is proposed to simulate the Doppler spectrum of sea echoes acquired by a moving microwave radar. A comparison with results evaluated by the SSA demonstrates the effectiveness and superiority of the proposed method. The influences of radar motion, radar beamwidth, incident angle, and thermal noise on the Doppler characteristics are all considered in the new method. The simulated results demonstrate that the spectrum bandwidth of sea surface echoes acquired by radar on a dive staring motion platform becomes somewhat narrower. Full article
(This article belongs to the Section Ocean Remote Sensing)
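At its core, the Doppler spectrum discussed in this abstract is the power spectral density of the slow-time echo sequence. A minimal periodogram sketch (NumPy only; the `doppler_spectrum` helper, the 1000 Hz PRF, and the 60 Hz Bragg-like line are illustrative assumptions, not the paper's simulation):

```python
import numpy as np

def doppler_spectrum(echo, prf):
    """Periodogram estimate of the Doppler power spectrum of a
    slow-time complex echo sequence sampled at the given PRF (Hz)."""
    n = len(echo)
    spec = np.abs(np.fft.fftshift(np.fft.fft(echo))) ** 2 / n
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf))
    return freqs, spec

# Hypothetical echo: a single Doppler line at 60 Hz plus complex noise.
rng = np.random.default_rng(0)
t = np.arange(1024) / 1000.0                       # PRF = 1000 Hz
echo = np.exp(2j * np.pi * 60.0 * t) \
    + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
freqs, spec = doppler_spectrum(echo, 1000.0)
print(round(freqs[np.argmax(spec)], 1))            # nearest FFT bin to the 60 Hz line
```

A moving platform broadens (or, for the dive-staring geometry studied here, narrows) this spectrum; the sketch only shows how a spectrum is estimated from an echo time series.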
Show Figures

Figure 1
<p>The changes in the radar footprint with time: (<b>a</b>) the dive staring motion radar and (<b>b</b>) the horizontal motion radar. (The gray area denotes the footprint at the previous time, and the yellow area is the footprint at the later time. The red and green lines represent the radar range gates at the previous and the later time, respectively).</p>
Full article ">Figure 2
<p>Sea surface intercepted by the radar footprint (the black ellipse represents the radar footprint).</p>
Full article ">Figure 3
<p>The change in the incident angle of facets for different platform speeds: (<b>a</b>) 100 m/s, (<b>b</b>) 200 m/s, (<b>c</b>) 500 m/s, (<b>d</b>) 1000 m/s.</p>
Full article ">Figure 4
<p>Change in correlation coefficient of registration with platform speed: (<b>a</b>) dive staring motion radar, (<b>b</b>) horizontal motion radar.</p>
Full article ">Figure 5
<p>The normalized Doppler spectrum calculated by SSA and our method with different polarizations in the L-band acquired on a horizontal motion radar platform: (<b>a</b>–<b>c</b>) 0 m/s, (<b>d</b>–<b>f</b>) 50 m/s, (<b>g</b>–<b>i</b>) 200 m/s.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>The influence of platform speeds on Doppler spectrum characteristics of the scattering fields acquired by a dive staring motion radar with different polarizations: (<b>a</b>–<b>c</b>) 0 m/s, (<b>d</b>–<b>f</b>) 100 m/s, (<b>g</b>–<b>i</b>) 500 m/s.</p>
Full article ">Figure 7
<p>The Doppler characteristics of sea surface echo at different initial heights: (<b>a</b>–<b>c</b>) 3000 m, (<b>d</b>–<b>f</b>) 5000 m.</p>
Full article ">Figure 8
<p>The influence of beam width on short-time Doppler spectrum characteristics at different polarizations: (<b>a</b>–<b>c</b>) the beam width is 1°, (<b>d</b>–<b>f</b>) the beam width is 3°.</p>
Full article ">Figure 9
<p>The Doppler characteristics of sea echoes at different incident angles: (<b>a</b>–<b>c</b>) 30°, (<b>d</b>–<b>f</b>) 50°.</p>
Full article ">Figure 10
<p>The Doppler characteristics of sea surface echo at different radar frequencies: (<b>a</b>–<b>c</b>) C-band, (<b>d</b>–<b>f</b>) X-band, (<b>g</b>–<b>i</b>) Ku-band.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>The Doppler characteristics of sea surface echo at different bandwidths: (<b>a</b>–<b>c</b>) 100 MHz, (<b>d</b>–<b>f</b>) 50 MHz.</p>
Full article ">Figure 12
<p>The Doppler power spectrum at different wind speeds ranging from 5 m/s to 15 m/s: (<b>a</b>) VV-Pol, (<b>b</b>) HH-Pol, (<b>c</b>) VH-Pol.</p>
Full article ">Figure 13
<p>The Doppler power spectrum in different wind directions of different polarizations: (<b>a</b>) VV-Pol, (<b>b</b>) HH-Pol, (<b>c</b>) VH-Pol.</p>
Full article ">
23 pages, 2929 KiB  
Article
Misaligned RGB-Infrared Object Detection via Adaptive Dual-Discrepancy Calibration
by Mingzhou He, Qingbo Wu, King Ngi Ngan, Feng Jiang, Fanman Meng and Linfeng Xu
Remote Sens. 2023, 15(19), 4887; https://doi.org/10.3390/rs15194887 - 9 Oct 2023
Cited by 6 | Viewed by 3099
Abstract
Object detection based on RGB and infrared images has emerged as a crucial research area in computer vision, and the synergy of RGB-Infrared ensures the robustness of object-detection algorithms under varying lighting conditions. However, the captured RGB-IR image pairs typically exhibit spatial misalignment due to sensor discrepancies, leading to compromised localization performance. Furthermore, owing to the inconsistent distribution of deep features from the two modalities, directly fusing multi-modal features weakens the feature difference between the object and the background, thereby interfering with RGB-Infrared object-detection performance. To address these issues, we propose an adaptive dual-discrepancy calibration network (ADCNet) for misaligned RGB-Infrared object detection, including spatial-discrepancy and domain-discrepancy calibration. Specifically, the spatial-discrepancy calibration module conducts an adaptive affine transformation to achieve spatial alignment of features. Then, the domain-discrepancy calibration module separately aligns object and background features from the different modalities, making the distributions of the object and background in the fused feature easier to distinguish and therefore enhancing the effectiveness of RGB-Infrared object detection. Our ADCNet outperforms the baseline by 3.3% and 2.5% in mAP50 on the FLIR and misaligned M3FD datasets, respectively. Experimental results demonstrate the superiority of our proposed method over state-of-the-art approaches. Full article
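The domain-discrepancy calibration in ADCNet builds on Adaptive Instance Normalization (AdaIN; see Figure 5), which normalizes one modality's features per channel and re-scales them to the other modality's channel statistics. A minimal NumPy sketch (the feature shapes and values here are invented for illustration; the network applies this to learned deep features):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: normalize `content` features
    per channel, then re-scale them to the per-channel mean/std of
    `style`. Both arrays have shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

rng = np.random.default_rng(1)
rgb_feat = rng.normal(5.0, 2.0, size=(8, 16, 16))  # hypothetical RGB features
ir_feat = rng.normal(0.0, 1.0, size=(8, 16, 16))   # hypothetical IR features
aligned = adain(rgb_feat, ir_feat)
# After AdaIN, the RGB features carry the IR channel statistics.
print(np.allclose(aligned.mean(axis=(1, 2)), ir_feat.mean(axis=(1, 2))))  # True
```

Matching channel statistics before fusion is what keeps the object/background distributions of the two modalities from overlapping in the fused feature.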
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Illustration of spatial misalignment. (<b>a</b>) Low-quality bounding boxes caused by spatial misalignment. (<b>b</b>) The fusion image generated by the method TarDAL [<a href="#B22-remotesensing-15-04887" class="html-bibr">22</a>] will produce a ghost, disturbing the localization. (<b>c</b>) The proposed ADCNet method is intended to learn the spatial relationship between RGB and IR to achieve spatial discrepancy calibration at the feature level.</p>
Full article ">Figure 2
<p>Directly fusing RGB and IR deep features with domain discrepancies will result in overlapping distributions of the object and background, making it challenging for the detection head. Our method first performs domain-discrepancy calibration on multi-modal features and then conducts feature fusion.</p>
Full article ">Figure 3
<p>Comparison of RGB-Infrared fusion methods in different stages. (<b>a</b>) Early fusion. (<b>b</b>) Mid-fusion. (<b>c</b>) Late fusion.</p>
Full article ">Figure 4
<p><b>An overview of our adaptive dual-discrepancy calibration network (ADCNet)</b> for misaligned RGB-Infrared object detection. (1) Size adaption module. The main focus of our method is on the dual-discrepancy calibration, i.e., (2) spatial discrepancy calibration and (3) domain discrepancy calibration in the graph. Among them, the rotated picture highlights the issue of spatial misalignment, and the colormap of the feature only represents the domain discrepancy.</p>
Full article ">Figure 5
<p>The flowchart of Adaptive Instance Normalization (AdaIN) and fusion. The highlighted part of the cube on the right illustrates the dimensions for normalization.</p>
Full article ">Figure 6
<p>Confusion matrix of validation results on FLIR by different methods. (<b>a</b>) ADCNet (ours). (<b>b</b>) Baseline. (<b>c</b>) Pool and nms. (<b>d</b>) CFT [<a href="#B19-remotesensing-15-04887" class="html-bibr">19</a>]. (<b>e</b>) TarDAL [<a href="#B22-remotesensing-15-04887" class="html-bibr">22</a>]. (<b>f</b>) Only infrared.</p>
Full article ">Figure 7
<p>Qualitative comparison of object-detection results in (<b>a</b>–<b>c</b>) three scenarios on the M3FD dataset. The rows from top to bottom are the results of infrared detection, TarDAL, CFT, the baseline, and our ADCNet, respectively. Scenes (<b>a</b>,<b>b</b>) show that our ADCNet detects smaller objects more reliably. In scene (<b>c</b>), the results of our method have higher confidence.</p>
Full article ">Figure 8
<p>A demonstration of the effect of our spatial discrepancy calibration module. The RGB and infrared images in the figure are from the misaligned version of the M3FD dataset. We project the RGB deep features of the baseline and the deep features through our calibration onto the IR original image (the third and fourth rows in the figure). Rows 5 and 6 highlight the cropped regions shown by the red and green dashed frames in rows 3 and 4. From the visualization results, it can be seen that the features after the spatial discrepancy calibration module have a higher coincidence with the IR image.</p>
Full article ">Figure 9
<p>Visualization of the distance relationship between two modalities’ features before (<b>a</b>) and after (<b>b</b>) domain-discrepancy calibration module through the t-SNE algorithm. Each point represents the feature of crop regions of objects and backgrounds, of which the green and blue five-pointed stars are typical examples as shown in the figure. The RGB and IR features are aligned after our domain-discrepancy calibration. The rows from top to bottom represent the three adaptive dual-discrepancy calibrations of our network, which are, respectively, located after the <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>8</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>16</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>32</mn> </mrow> </semantics></math> downsampling of the backbone.</p>
Full article ">
21 pages, 11473 KiB  
Article
Retrievals of Chlorophyll-a from GOCI and GOCI-II Data in Optically Complex Lakes
by Yuyu Guo, Xiaoqi Wei, Zehui Huang, Hanhan Li, Ronghua Ma, Zhigang Cao, Ming Shen and Kun Xue
Remote Sens. 2023, 15(19), 4886; https://doi.org/10.3390/rs15194886 - 9 Oct 2023
Cited by 5 | Viewed by 1884
Abstract
The chlorophyll-a (Chla) concentration is a key parameter for evaluating the eutrophication conditions of water, which is very important for monitoring algal blooms. Although the Geostationary Ocean Color Imager (GOCI) has been widely used in Chla inversion, the consistency of the Rayleigh-corrected reflectance (Rrc) of the GOCI and GOCI-II sensors still needs to be further evaluated, and a model suitable for lakes with complex optical properties needs to be constructed. The results show that (1) the derived Chla values of the GOCI and GOCI-II synchronous data were relatively consistent and continuous in three lakes in China. (2) The accuracy of the random forest (RF) model (R2 = 0.84, root mean square error (RMSE) = 11.77 μg/L) was higher than that of the empirical model (R2 = 0.79, RMSE = 12.63 μg/L) based on the alternative floating algae index (AFAI). (3) The interannual variation trend fluctuated, with high Chla levels in Lake Chaohu in 2015 and 2019 and in Lake Hongze in 2013, 2015, and 2022, while Lake Taihu peaked in 2017 and 2019. There were three types of diurnal variation patterns, namely, near-continuous increase (Class 1), near-continuous decrease (Class 2), and an increase followed by a decrease (Class 3); Class 3 accounted for the highest proportion in Lake Chaohu and Lake Taihu. This study analyzed the temporal and spatial variations of Chla in the three lakes over 12 years and provides support for the use of GOCI and GOCI-II data and the monitoring of Chla in optically complex inland waters. Full article
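The R² and RMSE accuracy figures quoted above follow the standard definitions. A short NumPy sketch of both metrics (the measured/retrieved Chla values are made up for illustration):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical measured vs. retrieved Chla (micrograms per liter).
measured = np.array([12.0, 30.0, 55.0, 80.0, 110.0])
retrieved = np.array([15.0, 28.0, 60.0, 75.0, 105.0])
print(round(r2_score(measured, retrieved), 3), round(rmse(measured, retrieved), 2))
# prints: 0.986 4.2
```

These are the same statistics used to compare the RF model against the AFAI-based empirical model in the abstract.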
Show Figures

Figure 1
<p>(<b>a</b>) Location of Lake Chaohu, Lake Taihu, and Lake Hongze in China. The spatial distributions of samples in (<b>b</b>) Lake Hongze, (<b>c</b>) Lake Chaohu, and (<b>d</b>) Lake Taihu are shown.</p>
Full article ">Figure 2
<p>Spectral response functions of (<b>a</b>) GOCI and (<b>b</b>) GOCI-II in different spectral ranges. Note that the bands marked with * are new bands of GOCI-II.</p>
Full article ">Figure 3
<p>Number of images of Lake Chaohu, Lake Hongze, and Lake Taihu in different months.</p>
Full article ">Figure 4
<p>Consistent verification results for different bands based on GOCI and GOCI-II data.</p>
Full article ">Figure 5
<p>Training and validation of the inversion model for the dominant factor: (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>) and (<b>m</b>) show the training results for the quadratic polynomial equation with five factors, namely, the AFAI, B7/B5, B7/B6, FLH, and SI, respectively; (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>) and (<b>n</b>) show the training results for the exponential equation; (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>) and (<b>o</b>) show the validation results for the exponential equation.</p>
Full article ">Figure 6
<p>Training and validation of the RF model on Chl<span class="html-italic">a</span> estimation: (<b>a</b>) importance of seven variables; (<b>b</b>) training; and (<b>c</b>) validation.</p>
Full article ">Figure 7
<p>Yearly mean Chl<span class="html-italic">a</span> from 2011 to 2022 in (<b>a</b>) Lake Chaohu, (<b>b</b>) Lake Hongze, and (<b>c</b>) Lake Taihu. Note that the GOCI data for 2011 started in May.</p>
Full article ">Figure 8
<p>Time series of the monthly mean Chl<span class="html-italic">a</span> of GOCI and GOCI-II from 2011 to 2022 in (<b>a</b>) Lake Chaohu, (<b>c</b>) Lake Hongze, and (<b>e</b>) Lake Taihu. Daily mean Chl<span class="html-italic">a</span> of GOCI and GOCI-II from January to March 2021 in (<b>b</b>) Lake Chaohu, (<b>d</b>) Lake Hongze, and (<b>f</b>) Lake Taihu.</p>
Full article ">Figure 9
<p>Monthly mean Chl<span class="html-italic">a</span> in (<b>a</b>) Lake Chaohu, (<b>b</b>) Lake Hongze, and (<b>c</b>) Lake Taihu.</p>
Full article ">Figure 10
<p>Three typical types of Chl<span class="html-italic">a</span> diurnal variation patterns in the three lakes: (<b>a</b>) shows images of Class 1 (near-continuous increase), (<b>b</b>) shows images of Class 2 (near-continuous decrease), (<b>c</b>) shows images of Class 3 (first an increase and then a decrease), and (<b>d</b>–<b>f</b>) show corresponding images for three dates. (<b>a1</b>–<b>f1</b>) show Lake Chaohu, (<b>a2</b>–<b>f2</b>) show Lake Hongze, and (<b>a3</b>–<b>f3</b>) show Lake Taihu.</p>
Full article ">Figure 11
<p>Error of the models with different numbers of samples: (<b>a</b>) R<sup>2</sup>, (<b>b</b>) RMSE, (<b>c</b>) MAPE, and (<b>d</b>) UPD.</p>
Full article ">Figure 12
<p>Location of the samples in (<b>a</b>) Lake Hongze and (<b>b</b>) validation of the models.</p>
Full article ">
23 pages, 10183 KiB  
Article
A Robust InSAR Phase Unwrapping Method via Improving the pix2pix Network
by Long Zhang, Guoman Huang, Yutong Li, Shucheng Yang, Lijun Lu and Wenhao Huo
Remote Sens. 2023, 15(19), 4885; https://doi.org/10.3390/rs15194885 - 9 Oct 2023
Cited by 3 | Viewed by 2238
Abstract
Phase unwrapping is the core of InSAR (interferometric synthetic aperture radar) data processing, and its output has a direct impact on the quality of the data-processing products. Noise introduced by the SAR system and interferometric processing is unavoidable, causing local phase inaccuracy and limiting the unwrapping results of traditional unwrapping methods. With the successful application of deep learning in a variety of fields in recent years, new concepts for phase unwrapping have emerged. This research offers a one-step InSAR phase unwrapping method based on an improved pix2pix network model, achieved by upgrading the pix2pix network generator and introducing the concept of quality-map guidance. Experiments on InSAR phase unwrapping utilizing simulated and real data with different noise intensities were carried out to compare the method with other unwrapping methods. The experimental results demonstrate that the proposed method is superior to the other unwrapping methods and is robust to noise. Full article
(This article belongs to the Section AI Remote Sensing)
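For intuition about the task the network learns, classical one-dimensional phase unwrapping restores continuity by adding an integer multiple of 2π wherever consecutive wrapped samples jump by more than π. A NumPy sketch of this baseline (equivalent to `np.unwrap`; the paper's learned method replaces this rule, and 2D interferograms need path-following or network-based schemes):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Classical 1D phase unwrapping: detect cycle slips between
    consecutive wrapped samples and subtract the accumulated 2*pi
    jumps so the sequence becomes continuous."""
    diffs = np.diff(wrapped)
    jumps = np.round(diffs / (2 * np.pi))            # integer cycle slips
    corrections = np.concatenate(([0.0], np.cumsum(jumps)))
    return wrapped - 2 * np.pi * corrections

true_phase = np.linspace(0, 12 * np.pi, 200)         # steadily increasing phase
wrapped = np.angle(np.exp(1j * true_phase))          # wrapped to (-pi, pi]
recovered = unwrap_1d(wrapped)
print(np.allclose(recovered, true_phase, atol=1e-8))  # True
```

Noise that pushes a sample across the ±π boundary defeats this rule, which is exactly the failure mode the quality-map-guided network is designed to tolerate.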
Show Figures

Figure 1
<p>The pix2pix network.</p>
Full article ">Figure 2
<p>The generator structure.</p>
Full article ">Figure 3
<p>Schematic issue diagram of the ASPP grid problems.</p>
Full article ">Figure 4
<p>Atrous spatial pyramid pooling module with reasonable sampling settings.</p>
Full article ">Figure 5
<p>The residual module.</p>
Full article ">Figure 6
<p>The framework of the discriminator.</p>
Full article ">Figure 7
<p>Generation of the simulated InSAR phase unwrapping datasets: (<b>a</b>) simulated topographic phase image; (<b>b</b>) simulated atmospheric phase image; (<b>c</b>) phase rewrapping map; (<b>d</b>) phase rewrapping map after noise; and (<b>e</b>) map merge.</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>Generation of the real InSAR phase unwrapping datasets: (<b>a</b>) the InSAR wrapped phase image of a certain area; (<b>b</b>) unwrapped phase image for the MCF method; and (<b>c</b>) map merge.</p>
Full article ">Figure 9
<p>Simulated wrapped phase and true phase: (<b>a</b>) unwrapped phase and (<b>b</b>) wrapped phase.</p>
Full article ">Figure 10
<p>Unwrapping results of simulated data: (<b>a</b>) unwrapped phase and error diagram of quality guide method; (<b>b</b>) unwrapped phase and error diagram of LS method; (<b>c</b>) unwrapped phase and error diagram of MCF method; (<b>d</b>) unwrapped phase and error diagram of U-net method; (<b>e</b>) unwrapped phase and error diagram of pix2pix method; and (<b>f</b>) unwrapped phase and error diagram of proposed method.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>Real wrapped phase and its unwrapped phase of experimental data 1: (<b>a</b>) unwrapped phase and (<b>b</b>) wrapped phase.</p>
Full article ">Figure 12
<p>Unwrapping result of real experimental data 1: (<b>a</b>) unwrapped phase and error diagram of quality guide method; (<b>b</b>) unwrapped phase and error diagram of LS method; (<b>c</b>) unwrapped phase and error diagram of MCF method; (<b>d</b>) unwrapped phase and error diagram of U-net method; (<b>e</b>) unwrapped phase and error diagram of pix2pix method; and (<b>f</b>) unwrapped phase and error diagram of proposed method.</p>
Figure 12 Cont.">
Full article ">Figure 13
<p>Real wrapped phase and its unwrapped phase of experimental data 2: (<b>a</b>) unwrapped phase and (<b>b</b>) wrapped phase.</p>
Full article ">Figure 14
<p>Unwrapping result of real experimental data 2: (<b>a</b>) unwrapped phase and error diagram of quality guide method; (<b>b</b>) unwrapped phase and error diagram of LS method; (<b>c</b>) unwrapped phase and error diagram of MCF method; (<b>d</b>) unwrapped phase and error diagram of U-net method; (<b>e</b>) unwrapped phase and error diagram of pix2pix method; and (<b>f</b>) unwrapped phase and error diagram of proposed method.</p>
Figure 14 Cont.">
Full article ">Figure 15
<p>The wrapping error curves with various noise levels on real data and unwrapping results corresponding to different methods: (<b>a</b>) wrapped phase with different noise intensity; (<b>b</b>) unwrapped phase of quality guide method; (<b>c</b>) unwrapped phase of LS method; (<b>d</b>) unwrapped phase of MCF method; (<b>e</b>) unwrapped phase of U-net method; (<b>f</b>) unwrapped phase pix2pix method; and (<b>g</b>) unwrapped phase of proposed method.</p>
Figure 15 Cont.">
Full article ">Figure 16
<p>The unwrapping error curves with various noise levels on real data.</p>
Full article ">
24 pages, 33279 KiB  
Article
Spatiotemporal Analysis of Landscape Ecological Risk and Driving Factors: A Case Study in the Three Gorges Reservoir Area, China
by Zhiyi Yan, Yunqi Wang, Zhen Wang, Churui Zhang, Yujie Wang and Yaoming Li
Remote Sens. 2023, 15(19), 4884; https://doi.org/10.3390/rs15194884 - 9 Oct 2023
Cited by 18 | Viewed by 1998
Abstract
Landscape ecological risk is considered the basis for regional ecosystem management decisions. Thus, it is essential to understand the spatial and temporal evolutionary patterns and drivers of landscape ecological risk. However, existing studies lack exploration of the long-term time series and driving mechanisms of landscape ecological risk. Based on multi-type remote sensing data, this study assesses landscape pattern changes and ecological risk in the Three Gorges Reservoir Area from 1990 to 2020 and ranks the driving factors using a geographical detector. We then introduce the geographically weighted regression model to explore the local spatial contributions of driving factors. Our results show: (1) From 1990 to 2020, the agricultural land decreased, while forest and construction land expanded in the Three Gorges Reservoir Area. The overall landscape pattern shifted toward aggregation. (2) The landscape ecological risk exhibited a decreasing trend. The areas with relatively high landscape ecological risk were primarily concentrated in the main urban area in the western region of the Three Gorges Reservoir Area and along the Yangtze River, with apparent spatial aggregation. (3) Social and natural factors affected landscape ecological risk. The main driving factors were human interference, annual average temperature, population density, and annual precipitation; interactions occurred between the drivers. (4) The influence of driving factors on landscape ecological risk showed spatial heterogeneity. Spatially, the influence of social factors (human interference and population density) on landscape ecological risk was primarily positively correlated. Meanwhile, the natural factors’ (annual average temperature and annual precipitation) influence on landscape ecological risk varied widely in spatial distribution, and the driving mechanisms were more complex. 
This study provides a scientific basis and reference for landscape ecological risk management, land use policy formulation, and optimization of ecological security patterns. Full article
(This article belongs to the Topic Aquatic Environment Research for Sustainable Development)
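The geographical detector used above ranks driving factors by the q-statistic, q = 1 − Σ_h N_h σ_h² / (N σ²), i.e., one minus the share of variance remaining within the factor's strata. A NumPy sketch (the risk values and factor strata are invented; the study applies this to the landscape ecological risk index and eight drivers):

```python
import numpy as np

def q_statistic(values, strata):
    """Geographical-detector q-statistic: the share of the variance of
    `values` explained by the categorical factor `strata`.
    q = 1 - sum_h(N_h * var_h) / (N * var)."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    within = sum(
        (strata == h).sum() * values[strata == h].var()
        for h in np.unique(strata)
    )
    return 1.0 - within / (values.size * values.var())

# Hypothetical ecological-risk index stratified by a driving factor.
risk = np.array([0.2, 0.25, 0.22, 0.6, 0.65, 0.58, 0.9, 0.85, 0.95])
factor = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])       # three factor levels
print(round(q_statistic(risk, factor), 3))
# prints: 0.987
```

A q near 1 means the factor's strata separate the risk values cleanly, which is how human interference and temperature emerge as the dominant drivers in the study.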
Show Figures

Figure 1
<p>Geographic location of the TGRA in China.</p>
Full article ">Figure 2
<p>Overall study framework.</p>
Full article ">Figure 3
<p>Land use transfer from 1990 to 2020. Note: The percentages represent the proportion of each land type to the total area.</p>
Full article ">Figure 4
<p>Landscape indices at the landscape level from 1990 to 2020. ((<b>a</b>) shows the dispersion-class and diversity indices; (<b>b</b>) shows the density-class and shape-class indices.) Note: CONTAG: contagion index, PLADJ: proportion of like adjacencies index, SHDI: Shannon’s diversity index, PD: patch density index, ED: edge density index, LPI: largest patch index, LSI: landscape shape index.</p>
Full article ">Figure 5
<p>Landscape indices at the class level from 1990 to 2020. Note: PLAND: percentage of landscape index, LSI: landscape shape index, PD: patch density index, ED: edge density index, LPI: largest patch index, AI: aggregation index.</p>
Full article ">Figure 6
<p>Spatiotemporal variation of landscape ecological risk in the TGRA.</p>
Full article ">Figure 7
<p>Landscape ecological risk level transfer in the TGRA.</p>
Full article ">Figure 8
<p>Spatial distribution of landscape ecological risk level transfer in the TGRA.</p>
Full article ">Figure 9
<p>ERI centroid and standard deviation ellipse changes in the TGRA. Note: ERI: Landscape ecological risk index.</p>
Full article ">Figure 10
<p>Spatial distribution of hot spots and cold spots of the ERI in the TGRA. Note: ERI: Landscape ecological risk index.</p>
Full article ">Figure 11
<p>Factor detector results for each indicator. Note: DEM: digital elevation model, TEM: annual average temperature, PRE: annual precipitation, WD: distance to water body, CD: distance to construction land, NL: annual artificial night light, POP: population density, HI: human interference, average: average contribution of each driving factor to landscape ecological risk from 1990 to 2020, q-value: contribution of drivers to landscape ecological risk.</p>
Full article ">Figure 12
<p>Interaction detector results for each indicator. Note: DEM: digital elevation model, TEM: annual average temperature, PRE: annual precipitation, WD: distance to water body, CD: distance to construction land, NL: annual artificial night light, POP: population density, HI: human interference, q-value: contribution of two-factor interactions to landscape ecological risk.</p>
Full article ">Figure 13
<p>Spatial variability of the regression coefficients between TEM and ERI. Note: TEM: annual average temperature, ERI: Landscape ecological risk index.</p>
Full article ">Figure 14
<p>Spatial variability of the regression coefficients between PRE and ERI. Note: PRE: annual precipitation, ERI: Landscape ecological risk index.</p>
Full article ">Figure 15
<p>Spatial variability of the regression coefficients between POP and ERI. Note: POP: population density, ERI: Landscape ecological risk index.</p>
Full article ">Figure 16
<p>Spatial variability of the regression coefficients between HI and ERI. Note: HI: human interference, ERI: Landscape ecological risk index.</p>
Full article ">
28 pages, 24166 KiB  
Article
Semi-Supervised Learning Method for the Augmentation of an Incomplete Image-Based Inventory of Earthquake-Induced Soil Liquefaction Surface Effects
by Adel Asadi, Laurie Gaskins Baise, Christina Sanon, Magaly Koch, Snehamoy Chatterjee and Babak Moaveni
Remote Sens. 2023, 15(19), 4883; https://doi.org/10.3390/rs15194883 - 9 Oct 2023
Cited by 5 | Viewed by 2655
Abstract
Soil liquefaction often occurs as a secondary hazard during earthquakes and can lead to significant structural and infrastructure damage. Liquefaction is most often documented through field reconnaissance and recorded as point locations. Complete liquefaction inventories across the impacted area are rare but valuable for developing empirical liquefaction prediction models. Remote sensing analysis can be used to rapidly produce the full spatial extent of liquefaction ejecta after an event to inform and supplement field investigations. Visually labeling liquefaction ejecta from remotely sensed imagery is time-consuming and prone to human error and inconsistency. This study uses a partially labeled liquefaction inventory created from visual annotations by experts and proposes a pixel-based approach to detecting unlabeled liquefaction using advanced machine learning and image processing techniques, and to generating an augmented inventory of liquefaction ejecta with high spatial completeness. The proposed methodology is applied to aerial imagery taken from the 2011 Christchurch earthquake and considers the available partial liquefaction labels as high-certainty liquefaction features. This study consists of two specific comparative analyses. (1) To tackle the limited availability of labeled data and their spatial incompleteness, a semi-supervised self-training classification via Linear Discriminant Analysis is presented, and the performance of the semi-supervised learning approach is compared with supervised learning classification. (2) A post-event aerial image with RGB (red-green-blue) channels is used to extract color transformation bands, statistical indices, texture components, and dimensionality reduction outputs, and performances of the classification model with different combinations of selected features from these four groups are compared. 
Building footprints are also used as the only non-imagery geospatial information to improve classification accuracy by masking out building roofs from the classification process. To prepare the multi-class labeled data, regions of interest (ROIs) were drawn to collect samples of seven land cover and land use classes. The labeled samples of liquefaction were also clustered into two groups (dark and light) using the Fuzzy C-Means clustering algorithm to split the liquefaction pixels into two classes. A comparison of the generated maps with fully and manually labeled liquefaction data showed that the proposed semi-supervised method performs best when selected high-ranked features of the two groups of statistical indices (gradient weight and sum of the band squares) and dimensionality reduction outputs (first and second principal components) are used. It also outperforms supervised learning and can better augment the liquefaction labels across the image in terms of spatial completeness. Full article
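The semi-supervised step described in the abstract — self-training built around Linear Discriminant Analysis, where confident predictions on unlabeled pixels become pseudo-labels — can be sketched with scikit-learn. This is an illustrative toy on synthetic features, not the authors' pipeline; the cluster means, feature count, and confidence threshold are assumptions.

```python
# Toy sketch of semi-supervised self-training with LDA (not the paper's code).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
# Two synthetic "pixel feature" clusters, standing in for the image-derived bands
# (e.g., liquefaction vs. background).
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Keep labels only for a small "high-certainty" subset; -1 marks unlabeled pixels.
y_partial = np.full(400, -1)
labeled = rng.choice(400, size=40, replace=False)
y_partial[labeled] = y[labeled]

# Self-training wraps LDA: at each iteration, predictions above the probability
# threshold are added to the training set as pseudo-labels.
model = SelfTrainingClassifier(LinearDiscriminantAnalysis(), threshold=0.95)
model.fit(X, y_partial)

acc = (model.predict(X) == y).mean()
print(f"agreement with true labels: {acc:.2f}")
```

A high threshold keeps the pseudo-labels conservative, mirroring the idea of treating only high-certainty pixels as liquefaction features.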
Show Figures

Graphical abstract
Figure 1
<p>The location of the two aerial image tiles used in this study in Christchurch, New Zealand. The resolution of the images is 10 cm, and each tile is 720-by-480 m in size. The image tiles are all projected to the New Zealand Transverse Mercator (NZTM) system. The left and right images are shown in detail via <a href="#remotesensing-15-04883-f002" class="html-fig">Figure 2</a>a,b, respectively.</p>
Figure 2
<p>(<b>a</b>) The image tile used for model development and evaluation. (<b>b</b>) The image tile used for further model application. Both tiles have a resolution of 10 cm, and each tile is 720-by-480 m in size. Linear stretching is performed on the image for better visualization contrast. Row and column are indicators of pixel number in each axis of the image shown in the figure.</p>
Figure 3
<p>(<b>a</b>) The image tile used for model development and evaluation with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in red) overlayed. (<b>b</b>) The image tile with a complete label (in magenta) was created for model validation in this study. (<b>c</b>) The image tile used for model application with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in red) overlayed. Row and column are indicators of pixel number in each axis of the image shown in the figure.</p>
Figure 4
<p>Building footprints (in yellow) provided by LINZ [<a href="#B44-remotesensing-15-04883" class="html-bibr">44</a>], used to mask out the buildings in this study for (<b>a</b>) model development tile, and (<b>b</b>) model application tile. The version shown in this figure was modified by adding a few polygons for missing buildings and removing some polygons for which no building was observed in the imagery. Row and column are indicators of pixel number in each axis of the image shown in the figure.</p>
Figure 5
<p>Flowchart of the proposed method, showing how multi-band multi-class labeled data are created and fed to the semi-supervised learning method to complete the partially labeled liquefaction data [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>].</p>
Figure 6
<p>(<b>a</b>) An example of the liquefaction ejecta polygons drawn by Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>]. (<b>b</b>) Fuzzy C-Means clustering results for the example image tile with dark and light liquefaction. The dark red and orange colors are indicators of dark and light liquefaction, respectively. Uncertain pixels are removed from liquefaction training data based on their low probability of belonging to any of the dark or light liquefaction classes.</p>
Figure 7
<p>The drawn ROIs to collect training data for different land use and land cover classes on the two used tiles in this paper, including trees (dark green), vegetation (light green), soil (light brown), shadow (pink), water (light blue), roads (black), and pavements/driveways (dark blue). Row and column are indicators of pixel number in each axis of the image shown in the figure. (<b>a</b>) Model development tile, and (<b>b</b>) Model application tile.</p>
Figure 8
<p>Comparative box plots of the individual RGB channels per class. The mean of RGB channels of each class is also calculated and plotted on the figure using magenta color.</p>
Figure 9
<p>Features extraction outputs via color transformation techniques. (<b>a</b>) The RGB image with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] polygons (in red) and validation polygons used in this study (in magenta). (<b>b</b>–<b>d</b>) Hue, Saturation, and Value bands of HSV transformation, respectively. (<b>e</b>–<b>g</b>) Decorrelation stretch bands 1, 2 and 3, respectively. (<b>h</b>) Cyan. (<b>i</b>) Magenta. (<b>j</b>) Yellow. (<b>k</b>) Black.</p>
Figure 10
<p>Feature extraction outputs via texture analysis techniques. (<b>a</b>) The RGB image with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] polygons (in red) and validation polygons used in this study (in magenta). (<b>b</b>–<b>e</b>) Gabor filters generated at 4 orientations (0, 45, 90, and 135 degrees, respectively) with a wavelength of 5 pixels/cycle. (<b>f</b>) Approximation coefficients of the two-dimensional Haar wavelet transform with symmetric extension mode (half point): boundary value symmetric replication. (<b>g</b>) Convolution filter. (<b>h</b>) Correlation filter.</p>
Figure 11
<p>Feature extraction outputs via statistical indices. (<b>a</b>) The RGB image with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] polygons (in red) and validation polygons used in this study (in magenta). (<b>b</b>) Entropy filter. (<b>c</b>) Gradient weight. (<b>d</b>) Standard deviation filter. (<b>e</b>) Range filter. (<b>f</b>) Mean absolute deviation. (<b>g</b>) Pixel variance. (<b>h</b>) Sum of squares.</p>
Figure 12
<p>Feature extraction outputs via dimensionality reduction techniques. (<b>a</b>) The RGB image with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] polygons (in red) and validation polygons used in this study (in magenta). (<b>b</b>) Grayscale image. (<b>c</b>–<b>e</b>) First, second, and third bands of PCA method’s output, respectively. (<b>f</b>–<b>h</b>) First, second, and third bands of MNF method’s output, respectively.</p>
Figure 13
<p>Binary classification accuracy results of the proposed semi-supervised model using different combination of features, which were calculated by comparing the binarized classification labels with the validation liquefaction labels shown in <a href="#remotesensing-15-04883-f003" class="html-fig">Figure 3</a>b. The darker green color in the heatmap table indicates superior performance by the model.</p>
Figure 14
<p>Binary classification accuracy results of the proposed semi-supervised model versus the supervised model using the preferred combination of features, which were calculated by comparing the binarized classification labels with the validation liquefaction labels shown in <a href="#remotesensing-15-04883-f003" class="html-fig">Figure 3</a>b. The darker green color in the heatmap table indicates superior performance by the model.</p>
Figure 15
<p>(<b>a</b>) Binary classification results of Model 6 (preferred model) with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in red) overlaying the model prediction (in yellow). (<b>b</b>) Model 6 output (in yellow) compared with validation labels (in magenta). (<b>c</b>) Supervised classification output with validation labels overlayed. (<b>d</b>) Model 1 (RGB-based model) classification output with validation labels overlayed. Row and column are indicators of pixel number in each axis of the image shown in the figure.</p>
Figure 16
<p>Binary classification results of Model 1 (RGB-based model in the middle column) and Model 6 (preferred model in the right column) with Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in thick red) and validation labels (in magenta) overlaying the model prediction (in yellow). (<b>a</b>–<b>e</b>) Example regions in the model development tile visualized for evaluation of the predictions.</p>
Figure 17
<p>The model application image tile with binary classification results of Model 6 (preferred model in the right column) and Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in thick red). Row and column are indicators of pixel number in each axis of the image shown in the figure.</p>
Figure 18
<p>The model application example images with binary classification results of Model 6 (preferred model in the right column) and Sanon et al. (2022) [<a href="#B43-remotesensing-15-04883" class="html-bibr">43</a>] liquefaction polygons (in thick red).</p>
17 pages, 5526 KiB  
Article
Record Low Arctic Stratospheric Ozone in Spring 2020: Measurements of Ground-Based Differential Optical Absorption Spectroscopy in Ny-Ålesund during 2017–2021
by Qidi Li, Yuhan Luo, Yuanyuan Qian, Ke Dou, Fuqi Si and Wenqing Liu
Remote Sens. 2023, 15(19), 4882; https://doi.org/10.3390/rs15194882 - 9 Oct 2023
Cited by 1 | Viewed by 1228
Abstract
The Arctic stratospheric ozone depletion event in spring 2020 was more severe than in any previous year. We retrieved the critical indicator ozone vertical column density (VCD) using zenith scattered light differential optical absorption spectroscopy (ZSL-DOAS) from March 2017 to September 2021 in Ny-Ålesund, Svalbard, Norway. The average ozone VCD over Ny-Ålesund between 18 March and 18 April 2020 was approximately 274.8 Dobson units (DU), only 64.7 ± 0.1% of that recorded in the other years (2017, 2018, 2019, and 2021). The maximum daily difference during this period was 195.7 DU. The retrieved daily averages of ozone VCDs were compared with satellite observations from the Global Ozone Monitoring Experiment-2 (GOME-2), a Brewer spectrophotometer, and a Système d’Analyse par Observation Zénithale (SAOZ) spectrometer at Ny-Ålesund. As determined using the empirical cumulative distribution function, ozone VCDs from the ZSL-DOAS dataset were strongly correlated with data from GOME-2 and SAOZ at both lower and higher values, whereas ozone VCDs from the Brewer instrument were overestimated. The resulting Pearson correlation coefficients were relatively high at 0.97, 0.87, and 0.91, and the relative deviations were 2.3%, 3.1%, and 3.5%, respectively. Sounding and ERA5 data indicated that severe ozone depletion occurred between mid-March and mid-April 2020 in the 16–20 km altitude range over Ny-Ålesund, which was strongly associated with persistently low temperatures throughout the winter of 2019/2020. Using ZSL-DOAS observations, we obtained ozone VCDs and provided evidence for the unprecedented ozone depletion during the Arctic spring of 2020. This is essential for the study of polar ozone changes and their effect on climate change and ecological conditions. Full article
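The core ZSL-DOAS retrieval step — obtaining the VCD as the slope of a linear fit of differential slant column densities (dSCDs) against air mass factors (AMFs), as in the paper's Figure 3 — can be sketched as follows. All numbers here are illustrative assumptions, not the paper's data.

```python
# Langley-type sketch: VCD is the slope of dSCD vs. AMF (synthetic twilight scan).
import numpy as np

amf = np.linspace(2.0, 12.0, 20)             # zenith-sky AMFs during twilight
true_vcd, scd_ref = 275.0, 550.0             # assumed VCD (DU) and reference SCD
rng = np.random.default_rng(1)
dscd = true_vcd * amf - scd_ref + rng.normal(0, 5, 20)   # noisy dSCDs

# Degree-1 fit: slope estimates the VCD, intercept absorbs the reference column.
slope, intercept = np.polyfit(amf, dscd, 1)
print(f"retrieved VCD ~ {slope:.1f} DU (true {true_vcd})")
```

The intercept reflects the ozone already present in the reference spectrum, which is why only the slope carries the vertical column information.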
Show Figures

Figure 1
<p>Ground-based ZSL-DOAS instrument and experimental site in Ny-Ålesund.</p>
Figure 2
<p>Spectrum fits of ozone on 31 March 2020.</p>
Figure 3
<p>The linear fit between the ozone dSCDs and AMFs on 31 March 2020.</p>
Figure 4
<p>Ozone VCDs over Ny-Ålesund from ZSL-DOAS, GOME-2, Brewer, and SAOZ.</p>
Figure 5
<p>Ozone data over Ny-Ålesund for 2020 and the average ozone data (black) for 2017, 2018, 2019, and 2021.</p>
Figure 6
<p>ECDF plots of ozone VCDs from ZSL-DOAS with (<b>a</b>) GOME-2, (<b>b</b>) Brewer, and (<b>c</b>) SAOZ. Boxplots of ozone VCDs from ZSL-DOAS with (<b>d</b>) GOME-2, (<b>e</b>) Brewer, and (<b>f</b>) SAOZ. In the boxplots, the black central bar indicates the median and the white triangle indicates the mean value.</p>
Figure 7
<p>Scatter plots and linear fits of the retrieved ozone VCDs with (<b>a</b>) GOME-2, (<b>b</b>) Brewer, and (<b>c</b>) SAOZ.</p>
Figure 8
<p>The ozone profiles, above Ny-Ålesund, from January to July of each year for (<b>a</b>) 2017, (<b>c</b>) 2018, (<b>e</b>) 2019, (<b>g</b>) 2020, and (<b>i</b>) 2021, and the temperature profiles for (<b>b</b>) 2017, (<b>d</b>) 2018, (<b>f</b>) 2019, (<b>h</b>) 2020, and (<b>j</b>) 2021.</p>
Figure 9
<p>Temperatures (at 70 hPa) over Ny-Ålesund from November 2016 to September 2021; the blue line denotes the threshold temperature for the formation of PSCs.</p>
Figure A1
<p>The ozone profiles, above Ny-Ålesund, from 9 January to 1 July 2020 from (<b>a</b>) ozonesonde and (<b>b</b>) ERA5, and (<b>c</b>) the relative differences between ozonesonde and ERA5.</p>
20 pages, 4578 KiB  
Article
Novel Compact Polarized Martian Wind Imaging Interferometer
by Chunmin Zhang, Yanqiang Wang, Biyun Zhang, Tingyu Yan, Zeyu Chen and Zhengyi Chen
Remote Sens. 2023, 15(19), 4881; https://doi.org/10.3390/rs15194881 - 9 Oct 2023
Cited by 2 | Viewed by 1565
Abstract
The Mars Atmospheric Wind Imaging Interferometer offers several advantages, notably its high throughput, enabling the acquisition of precise, high-vertical-resolution data on the temperature and wind fields in the Martian atmosphere. Given the current absence of such an interferometer, this paper introduces a novel Mars wind field imaging interferometer. By analyzing the photochemical model of O2 (a1Δg) 1.27 μm molecular airglow radiation in the Martian atmosphere and considering the impact of the instrument signal-to-noise ratio (SNR), we chose an optical path difference (OPD) of 8.6 cm for the interferometer. The all-solid-state polarized wind imaging interferometer is miniaturized by incorporating two arm glasses as the compensation medium in its construction, achieving both field widening and temperature compensation. Additionally, an F-P etalon is designed to selectively filter the three desired spectral lines of the O2 dayglow, and its effect is evaluated through simulations. The accuracy of the proposed compact Mars polarized wind imaging interferometer in detecting Mars’ wind and temperature fields has been validated through rigorous theoretical derivation and comprehensive computer simulations. The interferometer boasts several advantages, including its compact size, static stability, minimal stray light, and absence of moving parts. It establishes the theoretical, technological, and instrument engineering foundations for future simultaneous static measurement of Martian global atmospheric wind fields, temperature fields, and ozone concentrations from spacecraft, thereby significantly contributing to the dataset for investigating Martian atmospheric dynamics. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
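A wind imaging interferometer of this kind infers line-of-sight wind from the fringe phase shift Δφ = 2πσDv/c, recovered from four phase-stepped interferograms (the "four-phase step" of the paper's Figures 7 and 8) via the standard four-step algorithm. A minimal sketch, using the 1.27 μm line and 8.6 cm OPD from the abstract; the wind value, intensity, and visibility are assumed test inputs.

```python
# Four-step phase retrieval sketch: I_k = I0 * (1 + V*cos(phi + k*pi/2)).
import numpy as np

c = 2.998e8                  # speed of light, m/s
sigma = 1.0 / 1.27e-6        # wavenumber of the 1.27 um O2 dayglow line, m^-1
D = 0.086                    # optical path difference, m (from the abstract)

v_true = 100.0               # assumed line-of-sight wind, m/s
phi = 2 * np.pi * sigma * D * v_true / c        # wind-induced phase shift

I0, V = 1.0, 0.8             # assumed intensity and fringe visibility
I = [I0 * (1 + V * np.cos(phi + k * np.pi / 2)) for k in range(4)]

# I[3]-I[1] = 2*V*sin(phi), I[0]-I[2] = 2*V*cos(phi), so atan2 recovers phi.
phi_ret = np.arctan2(I[3] - I[1], I[0] - I[2])
v_ret = c * phi_ret / (2 * np.pi * sigma * D)
print(f"retrieved wind: {v_ret:.1f} m/s")
```

The larger the OPD, the larger the phase shift per m/s, which is the trade-off (against SNR) behind the 8.6 cm choice discussed in the abstract.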
Show Figures

Graphical abstract
Figure 1
<p>Schematic diagram of polarized Martian wind imaging interferometer. P<sub>1</sub> represents a linear polarizer, and the red arrow indicates that its transmission axis is at a 45° angle to the x-axis. P<sub>2</sub> refers to an array of linear polarizers, consisting of four sub-linear polarizers with their transmission axes at angles of 0°, 45°, 90°, and 135° relative to the x-axis, respectively.</p>
Figure 2
<p>Limb imaging geometry (from the orbit) [<a href="#B3-remotesensing-15-04881" class="html-bibr">3</a>].</p>
Figure 3
<p>Wind velocity error variations with OPD and SNR.</p>
Figure 4
<p>Relative intensity of F-P etalon as a function of the incident angle for the three O<sub>2</sub> spectral lines. The three dashed lines represent the relative intensities of the three selected dayglow spectral lines used for Mars atmospheric wind field measurement after passing through the F-P Etalon. The solid line represents the total intensity or the envelope line of these three spectral lines.</p>
Figure 5
<p>Simulated image of O<sub>2</sub> three spectral lines within the instrument’s field of view. The three circular rings correspond to the selected three independent dayglow lines.</p>
Figure 6
<p>The ray tracing of pyramid prism modeled using Zemax optics software: (<b>a</b>) the optical layout; (<b>b</b>) the polarization pupil map for 45° linearly polarization incidence; (<b>c</b>) the polarization pupil map for circularly polarized incidence.</p>
Figure 7
<p>Simulated interferograms with the standard “Four-phase step” for zero wind velocity on the CCD detector. The phase steps are (<b>a</b>) 0°, (<b>b</b>) 90°, (<b>c</b>) 180°, and (<b>d</b>) 270°, respectively. The grayscale values are relative units, and the CCD is a 12-bit gray level.</p>
Figure 8
<p>Simulated interferograms of the Martian wind field with the standard “Four-phase step” on CCD detector. The phase steps are (<b>a</b>) 0°, (<b>b</b>) 90°, (<b>c</b>) 180°, and (<b>d</b>) 270°, respectively.</p>
Figure 9
<p>Martian atmospheric wind velocity images. (<b>a</b>) Typical original Martian wind velocity image (the solar longitude is approximately 65°, the local time is 0:00 h, the altitude is around 30 km, and presence of dust is not clear) [<a href="#B45-remotesensing-15-04881" class="html-bibr">45</a>]. (<b>b</b>) Wind velocity image directly retrieved from the four interferograms with noise. (<b>c</b>) Wind velocity image retrieved from the denoised interferograms. (<b>d</b>) Wind velocity retrieval error.</p>
Figure 10
<p>Martian atmospheric temperature Images. (<b>a</b>) Typical original Martian temperature image (the solar longitude is approximately 65°, the local time is 0:00 h, presence of dust is not clear) [<a href="#B45-remotesensing-15-04881" class="html-bibr">45</a>]. (<b>b</b>) Temperature image directly retrieved from the four interferograms with noise. (<b>c</b>) Temperature image retrieved from the denoised interferograms. (<b>d</b>) Temperature retrieval error.</p>
22 pages, 8432 KiB  
Article
Drone Photogrammetry for Accurate and Efficient Rock Joint Roughness Assessment on Steep and Inaccessible Slopes
by Jiamin Song, Shigui Du, Rui Yong, Changshuo Wang and Pengju An
Remote Sens. 2023, 15(19), 4880; https://doi.org/10.3390/rs15194880 - 9 Oct 2023
Cited by 6 | Viewed by 2224
Abstract
The roughness of rock joints exerts a substantial influence on the mechanical behavior of rock masses. To identify potential failure mechanisms and design effective protection measures, accurate measurement of joint roughness is essential. Traditional methods, such as contact profilometry, laser scanning, and close-range photogrammetry, encounter difficulties on steep and inaccessible slopes, compromising both the safety and the precision of data collection. This study assesses the feasibility of using drone photogrammetry to quantify the roughness of rock joints on steep and inaccessible slopes. Field experiments were conducted, and the results were compared with those of 3D laser scanning to validate the approach’s procedural details, applicability, and measurement accuracy. At a 3 m image capture distance, the root mean square error of the multiscale model-to-model cloud comparison (M3C2) distance and the average roughness measurement error were less than 0.5 mm and 10%, respectively. The results demonstrate the feasibility and potential of drone photogrammetry for joint roughness measurement, providing a useful tool for practitioners and researchers pursuing innovative solutions for assessing rock joint roughness on precipitous and hazardous slopes. Full article
(This article belongs to the Special Issue Rockfall Hazard Analysis Using Remote Sensing Techniques)
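To illustrate the kind of profile statistics being compared here (sampling-interval-dependent roughness parameters and model-to-model RMSE), below is a toy computation on a synthetic 2D profile. The slope-RMS parameter Z2 is one common roughness statistic; the profile shape, sampling interval, and noise levels are assumptions, not the paper's data.

```python
# Toy roughness statistics for a synthetic 2D joint profile.
import numpy as np

dx = 0.5e-3                                   # sampling interval, m (assumed)
x = np.arange(0, 0.1, dx)                     # 10 cm long profile
rng = np.random.default_rng(2)
z = 0.002 * np.sin(2 * np.pi * x / 0.02) + rng.normal(0, 1e-4, x.size)

# Z2: root-mean-square of the first derivative, sensitive to sampling interval.
z2 = np.sqrt(np.mean(np.diff(z) ** 2) / dx**2)
print(f"Z2 = {z2:.3f}")

# RMSE between two co-registered profiles (e.g., drone vs. laser), analogous
# to the M3C2-distance accuracy check reported in the abstract.
z_ls = z + rng.normal(0, 2e-4, z.size)
rmse = np.sqrt(np.mean((z - z_ls) ** 2))
print(f"RMSE = {rmse * 1000:.2f} mm")
```

Because Z2 is derivative-based, coarser sampling smooths the profile and lowers the estimate, which is why roughness comparisons must hold the sampling interval fixed.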
Show Figures

Figure 1
<p>Flowchart of the method proposed for determining rock joint roughness on a slope using drone photogrammetry.</p>
Figure 2
<p>Study area (<b>a</b>) location of Yunnan on a map of China, (<b>b</b>) location of the Lanping open pit mine in Yunnan province, (<b>c</b>) top view of the Lanping open pit mine, and (<b>d</b>) joint surfaces and sites in the process of study with drone photography.</p>
Figure 3
<p>Scale bar, (<b>a</b>) the scale bar placed at the region of interest and (<b>b</b>) the distance between two adjacent coded targets. The horizontal distance is 89.30 mm, and the vertical distance is 13.27 mm.</p>
Figure 4
<p>Point cloud acquisition using a 3D laser scanner, (<b>a</b>) scanning process, and (<b>b</b>) point markers fixed on the joint surface.</p>
Figure 5
<p>Point clouds generated using drone photogrammetry with different shooting distances: (<b>a</b>) 3 m, (<b>b</b>) 6 m, and (<b>c</b>) 15 m.</p>
Figure 6
<p>M3C2 distance between the DP model and LS model. (<b>a</b>–<b>g</b>) refer to models DP-3-lowest, DP-3-low, DP-3-medium, DP-3-high, DP-3-ultrahigh, DP-6-ultrahigh, and DP-15-ultrahigh, respectively.</p>
Figure 7
<p>2D Profiles extracted from DP and LS models: (<b>a</b>) positions of the five profiles; (<b>b</b>) profiles of p1–p5.</p>
Figure 8
<p>Roughness results for the 2D profiles with different sampling intervals and roughness parameters. (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) correspond to the LS profiles, while (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) correspond to the DP profiles.</p>
Figure 9
<p>Joint roughness measurement error at different point spacings with drone photogrammetry.</p>
Figure 10
<p>The point cloud model is cropped to different sizes using the center enlargement method. (<b>a</b>) The plane schematic diagram, wherein the blue dotted line signifies the cropped samples, and the yellow line represents the rectangular coordinates. (<b>b</b>) The point cloud model with a size of 10 cm × 10 cm, and (<b>c</b>) the point cloud model with a size of 100 cm × 100 cm.</p>
Figure 11
<p>Results for 3D joint roughness from the LS models and DP models at different scales; (<b>a</b>–<b>j</b>) represent the different joint scales, ranging from 10 cm × 10 cm to 100 cm × 100 cm.</p>
Figure 12
<p>The effect of joint scale on the 3D roughness.</p>
Figure 13
<p>3D roughness measurement error of drone photogrammetry in different joint surface sizes.</p>
18 pages, 14118 KiB  
Article
Analysis of Ionospheric Anomalies before the Tonga Volcanic Eruption on 15 January 2022
by Jiandi Feng, Yunbin Yuan, Ting Zhang, Zhihao Zhang and Di Meng
Remote Sens. 2023, 15(19), 4879; https://doi.org/10.3390/rs15194879 - 9 Oct 2023
Cited by 11 | Viewed by 2536
Abstract
In this paper, observational data from GNSS stations, global ionospheric maps (GIMs) and electron densities from FORMOSAT-7/COSMIC-2 occultations are used to study ionospheric anomalies before the submarine volcanic eruption of Hunga Tonga–Hunga Ha’apai on 15 January 2022. (i) After excluding the influence of solar and geomagnetic disturbances and lower atmospheric forcing, we detect negative total electron content (TEC) anomalies at three GNSS stations on 5 January, before the volcanic eruption. The GIMs also detect a negative anomaly in the global ionospheric TEC only near the epicenter of the eruption on 5 January, with a maximum outlier exceeding 6 TECU. (ii) From 1 to 3 January (local time), the equatorial ionization anomaly (EIA) peak shifts significantly towards the Antarctic from afternoon to night. The EIA double peak weakens from 4 January, then disappears and merges into a single peak on 7 January. Meanwhile, the diurnal maxima of TEC at the TONG station decrease by nearly 10 TECU, and only one diurnal maximum occurred on 4 January local time (i.e., 5 January UT), whereas significant ionospheric diurnal double maxima (DDM) are observed on the other dates. (iii) We find an electron density maximum exceeding NmF2 at an altitude of 100–130 km above the volcanic eruption on 5 January (i.e., a sporadic E layer), with an electron density of 7.5 × 10⁵ el/cm³. Full article
(This article belongs to the Special Issue Ionosphere Monitoring with Remote Sensing II)
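The sliding interquartile range test used on the TEC series (the paper's Figure 4) flags epochs falling outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR] computed over a moving window. A toy sketch with a synthetic TEC series and an injected negative anomaly; the window length, series shape, and anomaly size are assumptions.

```python
# Sliding-IQR anomaly detection sketch on a synthetic TEC time series.
import numpy as np

rng = np.random.default_rng(3)
# Smooth quasi-periodic background (TECU) plus small observation noise.
tec = 30 + 5 * np.sin(np.linspace(0, 6 * np.pi, 360)) + rng.normal(0, 0.5, 360)
tec[200] -= 8.0                               # injected negative anomaly

window = 27                                   # trailing window length (assumed)
flags = np.zeros(tec.size, dtype=bool)
for i in range(window, tec.size):
    q1, q3 = np.percentile(tec[i - window:i], [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # Tukey fences from the window
    flags[i] = not (lo <= tec[i] <= hi)

print("anomalous epochs:", np.flatnonzero(flags))
```

Using a trailing window keeps the fences adaptive to the diurnal background, so only departures from the recent local behavior, like the injected drop, are flagged.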
Show Figures

Figure 1
<p>Location of volcanic eruption in Tonga (red pentagram) and distribution of GNSS stations (black triangle).</p>
Figure 2
<p>Geomagnetic and solar activity before and after the volcanic eruption from 1 to 16 January 2022. The red vertical line indicates the time of volcanic eruption (same below).</p>
Figure 3
<p>Variations in ionospheric TEC over TONG, LAUT and SAMO stations from 1 to 16 January 2022.</p>
Figure 4
<p>Ionospheric TEC anomalies detected using the sliding interquartile range method over TONG, LAUT and SAMO stations from 1 to 16 January 2022.</p>
Figure 5
<p>Ionospheric TEC variations (GPS-TEC and NeuralProphet-TEC) over TONG, LAUT and SAMO stations from 1 to 16 January 2022.</p>
Figure 6
<p>Ionospheric TEC anomalies detected using NeuralProphet over TONG, LAUT and SAMO stations on 1 to 16 January 2022.</p>
Figure 7
<p>Cross-wavelet transform of TEC time series and spatial weather parameters from 1 to 16 January 2022 at TONG station. The closed area of the thick black line passes the standard red noise test at 95% confidence level, indicating the significance of the period; the cone of influence (COI) area below the thin black solid line is the area of wavelet transform data with large edge effects, and the thick red line indicates the moment of volcanic eruption.</p>
Figure 8
<p>Wavelet coherence spectrum of TEC time series with space weather parameters at TONG station from 1 to 16 January 2022.</p>
Figure 9
<p>Distribution of the global ionospheric TEC anomaly on 5 January, from 02:00 to 07:00 UT. The red pentagram representing the location of the eruption center and the black solid line indicating the magnetic equator (same below).</p>
Figure 10
<p>Distribution of global ionospheric TEC on 5 January, from 00:00 to 05:00 UT.</p>
Figure 11
<p>Latitude–time–TEC variations extracted along the 175°W longitude line. The red line is the latitudinal position of the eruption.</p>
Figure 12
<p>TEC time series of KOKB, TONG and CHTI stations from 1 to 8 January (local time), with the 1st peaks, 2nd peaks and valleys of ionospheric DDM indicated by red, magenta and blue dots, respectively.</p>
Figure 13
<p>(<b>a</b>) Ionospheric electron density profile near the volcanic eruption detected by FOR-MOSAT-7/COSMIC-2 1st satellite on 5 January. The red pentagram is the location of the eruption, and the line segment in (<b>b</b>) is the tangent point trajectory of the satellite with GNSS satellite.</p>
Figure 14
<p>(<b>a</b>) Ionospheric electron density profile near the volcanic eruption detected by FOR-MOSAT-7/COSMIC-2 2nd satellite on 5 January. The red pentagram is the location of the eruption, and the line segment in (<b>b</b>) is the tangent point trajectory of the satellite with GNSS satellite. Ionospheric electron density profile from 90 to 150 km is shown in (<b>c</b>).</p>
Figure 15
<p>(<b>a</b>) Ionospheric electron density profile near the volcanic eruption detected by FOR-MOSAT-7/COSMIC-2 4th satellite on 5 January. The red pentagram is the location of the eruption, and the line segment in (<b>b</b>) is the tangent point trajectory of the satellite with GNSS satellite. Ionospheric electron density profile from 90 to 150 km is shown in (<b>c</b>).</p>
Figure 16
<p>Neutral wind variations from 50 to 100 km above eruption simulated by the HWM14 model during 5–10 January 2022.</p>
Figure 17
<p>The global distribution of Global Ultraviolet Imager (GUVI)-measured O/N2 ratio during 5–10 January 2022.</p>
20 pages, 6432 KiB  
Article
A Multi-Domain Joint Novel Method for ISAR Imaging of Multi-Ship Targets
by Yangyang Zhang, Ning Xu, Ning Li and Zhengwei Guo
Remote Sens. 2023, 15(19), 4878; https://doi.org/10.3390/rs15194878 - 8 Oct 2023
Cited by 4 | Viewed by 1531
Abstract
As key objects on the ocean, civilian and military ships must be monitored effectively to maintain maritime security. Inverse synthetic aperture radar (ISAR) imaging is one way to obtain high-resolution images of ship targets. In practical ISAR imaging, however, ships sailing in formation often exhibit complicated motion. Because the targets are close together, their rough imaging results cannot be completely separated in the image domain, and small differences in motion parameters cause their Doppler histories to overlap. Therefore, for ship formation targets with similar motion parameters within the same radar beam, this paper proposes a multi-domain joint ISAR separation imaging method for multi-ship targets. First, the method performs echo separation using the Hough transform (HT) with the minimum-entropy autofocus method in the image domain. Second, time–frequency curves are extracted in the time–frequency domain using the short-time Fourier transform (STFT); this resolves the aliasing of the ship formation targets in both the echo and the Doppler history after range compression and separates the echo signals of the sub-ship targets with high accuracy. Better-focused images of each target are then obtained via further motion compensation and precise imaging. Finally, the effectiveness of the proposed method is verified using simulated and measured data. Full article
(This article belongs to the Special Issue Advances in SAR: Sensors, Methodologies, and Applications II)
Show Figures

Figure 1: Multi-ship target geometry model.
Figure 2: Flowchart of the proposed algorithm.
Figure 3: Schematic diagram of the Hough transform. (a) Image-space results. (b) Parameter-space results.
Figure 4: Imaging flowchart based on time–frequency analysis.
Figure 5: Three-dimensional view of the ship model.
Figure 6: One-dimensional range profile and time–frequency distribution of a multi-ship target. (a) One-dimensional range profile. (b) Time–frequency distribution.
Figure 7: ISAR echo Hough transform results for the two targets.
Figure 8: Multi-target translational compensation results. (a) Sub-ship target 1 rough compensation. (b) Sub-ship target 2 rough compensation.
Figure 9: Multi-target rough imaging results. (a) Sub-ship target 1. (b) Sub-ship target 2.
Figure 10: Multi-target time–frequency distribution. (a) Sub-ship target 1. (b) Sub-ship target 2.
Figure 11: Simulated-data multi-ship target imaging results. (a,b) Image-domain separation of sub-ship targets 1 and 2. (c,d) Time–frequency-analysis separation of sub-ship targets 1 and 2. (e,f) Separation by the proposed method for sub-ship targets 1 and 2.
Figure 12: Measured-data range-compressed echo.
Figure 13: Measured-data multi-ship target imaging results. (a) Sub-target 1 translation compensation. (b) Sub-target 2 translation compensation. (c) Extracted sub-target 1 echo. (d) Extracted sub-target 2 echo.
Figure 14: Measured-data multi-ship target imaging results. (a,b) Image-domain separation of sub-ship targets 1 and 2. (c,d) Time–frequency-analysis separation of sub-ship targets 1 and 2. (e,f) Separation by the proposed method for sub-ship targets 1 and 2.
Figure 15: Separation imaging results based on the Keystone transform. (a) Sub-ship target 1. (b) Sub-ship target 2.
Figure 16: Time consumption of image-domain separation, time–frequency analysis, and multi-domain association.
26 pages, 8949 KiB  
Article
Numerical Modeling of Land Surface Temperature over Complex Geologic Terrains: A Remote Sensing Approach
by Saeid Asadzadeh and Carlos Roberto Souza Filho
Remote Sens. 2023, 15(19), 4877; https://doi.org/10.3390/rs15194877 - 8 Oct 2023
Cited by 3 | Viewed by 2224
Abstract
A physically based image-processing approach, built on a single-source surface energy balance framework, is developed here to model the land surface temperature (LST) over complex/rugged geologic terrains at medium to high spatial resolution (<10² m). The approach combines atmospheric parameters with a bulk-layer soil model and remote-sensing-based parameterization schemes to simulate surface temperature over bare surfaces. The model's inputs comprise a digital elevation model, surface temperature data, and a set of land surface parameters including albedo, emissivity, roughness length, thermal conductivity, soil porosity, and soil moisture content, adjusted for elevation, solar time, and moisture content where necessary. High-quality weather data were acquired from a nearby weather station. By solving the energy balance, heat, and water flow equations per pixel and subsequently integrating the surface and subsurface energy fluxes over time, a model-simulated temperature map/dataset is generated. The resulting map can then be contrasted with concurrent remote sensing LST (typically nighttime) data in order to remove the diurnal effects and constrain the contribution of the subsurface heating component. The model's performance and sensitivity were assessed across two distinct test sites, in China and Iran, using point-scale observational data and regional-scale ASTER imagery, respectively. The model, named the Surface Kinetic Temperature Simulator (SkinTES), has direct applications in resource exploration and geological studies in arid to semi-arid regions of the world. Full article
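The heat-flow part of such a model — integrating subsurface temperature forward in time, per pixel — can be sketched as an explicit finite-difference solution of the 1D soil heat equation ∂T/∂t = κ ∂²T/∂z². This is a hedged illustration only: the layer count, diffusivity, and diurnal forcing below are invented, and SkinTES itself additionally couples the full surface energy balance and soil moisture transport.

```python
import numpy as np

def simulate_soil_temperature(t_surface, dz=0.05, dt=60.0, n_layers=40,
                              kappa=5e-7, t_deep=15.0):
    """Explicit finite-difference solution of dT/dt = kappa * d2T/dz2,
    forced by a prescribed surface temperature series (deg C) at the top
    and a fixed deep-soil temperature at the bottom boundary."""
    T = np.full(n_layers, t_deep)
    r = kappa * dt / dz**2          # stability requires r < 0.5
    assert r < 0.5
    for ts in t_surface:
        T[0] = ts                   # Dirichlet surface boundary
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# invented diurnal surface forcing over three days, 1-minute steps
t = np.arange(3 * 24 * 60) * 60.0
forcing = 20.0 + 10.0 * np.sin(2.0 * np.pi * t / 86400.0)
profile = simulate_soil_temperature(forcing)
```

The resulting profile shows the expected damping of the diurnal wave with depth; in the model described here this integration would be repeated for every pixel, with κ and the boundary forcing derived from the remote-sensing inputs.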
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
Show Figures

Figure 1: Simplified workflow of the SkinTES land surface model for simulating surface temperature patterns (T_g) in a per-pixel fashion. Acronyms: α_g, surface albedo; ε_g, surface emissivity; β_S, topographic slope; β_A, topographic aspect; θ, soil moisture content; κ(θ), thermal diffusivity; K(θ), thermal conductivity; T, soil temperature profile; t, time; t_f, overall simulation time.
Figure 2: Comparison between model-simulated and observed land surface temperatures at the Tongyu cropland station, 23–30 April 2005. The temperature difference at the last time point is 0.98 °C.
Figure 3: Time series of observed and simulated surface fluxes at the Tongyu cropland station, 23–30 April 2005. (a) Net radiation flux (R_n), (b) sensible heat flux (H), (c) latent heat flux (λE), and (d) ground heat flux (G). The simulated sensible heat flux is highly correlated with the observed values (R² = 0.80), with MBD and RMSE of 12.5 and 49.6 W·m⁻², respectively; the largest disparities occur during the nights of 28 and 30 April. The latent heat flux has the lowest simulation error, with RMSE and MBD of 20.4 and 1.73 W·m⁻², even though only 63% of the variance is explained. The simulated ground heat flux agrees rather well with the observations during the day but tends to be underestimated at night, with RMSE up to 59.6 W·m⁻², MBD of −12.7 W·m⁻², and R² of 0.66. Crucially, the sum of the sensible, latent, and ground heat fluxes stays within ±3.3% of the net radiation (R_n), implying that the energy balance is well preserved in the model.
Figure 4: Model-simulated versus measured volumetric soil moisture content at 5 cm depth at the Tongyu station, 23–30 April 2005.
Figure 5: Impact of space (Δz) and time (Δt) step sizes on model performance, calculated using the Tongyu datasets.
Figure 6: Relative error of the simulated surface temperature at the last time point as a function of the spin-up period, calculated relative to a spin-up period of 13.5 days.
Figure 7: Sensitivity of the SkinTES model to its input parameters. (a) Surface temperature at the last time step, corresponding to 10:30 p.m. on 30 April 2005. (b) Diurnal surface temperature pattern. O₊ and O₋ denote changes in the model outputs due to increased and decreased input parameters, assessed against the base output using the absolute temperature difference (ΔT) and RMSE metrics, respectively. Abbreviations: T_a, air temperature; p_a, air pressure; υ₂, wind speed; R_H, relative humidity; θ, θ_S, and θ_r, actual, saturated, and residual volumetric soil water contents; θ_∞, soil moisture content at depth; D_θ, soil water diffusivity; K_h, soil hydraulic conductivity; K_dry and K_sat, dry and saturated thermal conductivities; C_s, volumetric heat capacity of solids; Z_0m, surface roughness length for momentum; α_g, surface albedo; ε_g, surface emissivity; ω₀ and b, empirical parameters related to surface conductance; h₁, thickness of the topsoil layer.
Figure 8: (a) Shaded-relief digital elevation model of the Qom study area. (b) Generalized geologic map of the area displayed on ASTER-VNIR color-composite imagery. Abbreviated geologic units: reddish sandstones, siltstones, and conglomerates (SS); marly sandstones (mSS); limestones (LS); marls and bituminous shales (SH); Quaternary alluvium and Pliocene conglomerate (Qal). (c) ASTER nighttime kinetic temperature data from 15 August 2020, superimposed on the shaded-relief image; the data are atmospherically corrected using the GDAS model and filtered with an FFT to remove striping noise. (d) Simulated land surface temperature concurrent with the image data in (c), calculated using an air-temperature lapse rate of 0.0075 °C·m⁻¹. The RMSE, MBD, R², and regression-line slope between the datasets of (c) and (d) are 1.74 °C, −0.35 °C, 0.19, and 0.9997, respectively. U_i and O mark areas where the temperature is under- and overestimated, respectively.
Figure 9: Effects of cloudiness on downwelling longwave radiation (R_aC/R_a) for different cloud types.
26 pages, 2173 KiB  
Article
Uvsq-Sat NG, a New CubeSat Pathfinder for Monitoring Earth Outgoing Energy and Greenhouse Gases
by Mustapha Meftah, Cannelle Clavier, Alain Sarkissian, Alain Hauchecorne, Slimane Bekki, Franck Lefèvre, Patrick Galopeau, Pierre-Richard Dahoo, Andrea Pazmino, André-Jean Vieau, Christophe Dufour, Pierre Maso, Nicolas Caignard, Frédéric Ferreira, Pierre Gilbert, Odile Hembise Fanton d’Andon, Sandrine Mathieu, Antoine Mangin, Catherine Billard and Philippe Keckhut
Remote Sens. 2023, 15(19), 4876; https://doi.org/10.3390/rs15194876 - 8 Oct 2023
Cited by 8 | Viewed by 3151
Abstract
Climate change is undeniably one of the most pressing and critical challenges facing humanity in the 21st century. In this context, monitoring the Earth's Energy Imbalance (EEI) in conjunction with greenhouse gases (GHGs) is fundamental to comprehensively understanding and addressing climate change. The French Uvsq-Sat NG pathfinder mission addresses this issue through the implementation of a six-unit CubeSat, which has dimensions of 111.3 × 36.6 × 38.8 cm in its unstowed configuration. Uvsq-Sat NG is a satellite mission spearheaded by the Laboratoire Atmosphères, Observations Spatiales (LATMOS) and supported by the International Satellite Program in Research and Education (INSPIRE); its launch is planned for 2025. One of the Uvsq-Sat NG objectives is to ensure the smooth continuity of the Earth Radiation Budget (ERB) measurements initiated with the Uvsq-Sat and Inspire-Sat satellites. Uvsq-Sat NG seeks to achieve broadband ERB measurements using state-of-the-art yet straightforward technologies. Another goal of the mission is to conduct precise and comprehensive monitoring of atmospheric gas concentrations (CO2 and CH4) on a global scale and to investigate their correlation with Earth's Outgoing Longwave Radiation (OLR). Uvsq-Sat NG carries several payloads, including Earth Radiative Sensors (ERSs) for monitoring incoming solar radiation and outgoing terrestrial radiation. A Near-Infrared (NIR) Spectrometer is onboard to assess GHG atmospheric concentrations through observations in the 1200–2000 nm wavelength range. Uvsq-Sat NG also includes a high-definition camera (NanoCam) designed to capture images of the Earth in the visible range. The NanoCam will facilitate post-processing of the data acquired by the spectrometer by ensuring accurate geolocation of the observed scenes; it will also offer the capability of observing the Earth's limb, providing the opportunity to roughly estimate the vertical temperature profile of the atmosphere. We present here the scientific objectives of the Uvsq-Sat NG mission, along with a comprehensive overview of the CubeSat platform's concept, the payload properties, and the mission's current status. Furthermore, we describe a method for retrieving atmospheric gas columns (CO2, CH4, O2, H2O) from the Uvsq-Sat NG NIR Spectrometer data. The retrieval is based on spectra simulated for a range of environmental conditions (surface pressure, surface reflectance, vertical temperature profile, mixing ratios of primary gases, water vapor, other trace gases, and cloud and aerosol optical depth distributions) as well as spectrometer characteristics (Signal-to-Noise Ratio (SNR) and spectral resolution from 1 to 6 nm). Full article
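One step in simulating such spectra is degrading high-resolution transmittances to the spectrometer's 1–6 nm resolution with a Gaussian convolution. The sketch below is an assumption-laden illustration, not the mission's code: the wavelength grid and the toy absorption line are invented, and only the FWHM-to-σ conversion and the convolution itself follow the standard recipe.

```python
import numpy as np

def to_instrument_resolution(wavelength_nm, transmittance, fwhm_nm):
    """Smooth a high-resolution transmittance spectrum to an
    instrument spectral resolution (FWHM, nm) with a Gaussian kernel.
    Assumes a uniform wavelength grid."""
    dlam = wavelength_nm[1] - wavelength_nm[0]
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    half = int(np.ceil(4.0 * sigma / dlam))               # truncate at 4 sigma
    x = np.arange(-half, half + 1) * dlam
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                                # preserve continuum
    return np.convolve(transmittance, kernel, mode="same")

lam = np.arange(1200.0, 2000.0, 0.1)   # invented 0.1 nm grid over the NIR band
# toy narrow absorption line at 1600 nm, depth 0.5, sigma 0.2 nm
line = 1.0 - 0.5 * np.exp(-0.5 * ((lam - 1600.0) / 0.2) ** 2)
smoothed = to_instrument_resolution(lam, line, fwhm_nm=1.0)
```

After convolution the line is wider and shallower while the continuum stays at 1; note that `mode="same"` zero-pads, so the first and last few samples near the array edges are not meaningful.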
Show Figures

Figure 1: Computer-aided design of the Uvsq-Sat NG satellite with its platform and its scientific payloads (NIR Spectrometer, NanoCam, Earth Radiative Sensors, photodiodes).
Figure 2: General ADCS control loop of the Uvsq-Sat NG CubeSat. Earth-Centered Earth-Fixed (ECEF) is a coordinate system that defines positions relative to the Earth's center while accounting for the Earth's rotation, providing a fixed reference frame for navigation and positioning calculations.
Figure 3: Schematic illustration of radiation transfer in passive remote sensing, applied to Uvsq-Sat NG. The observation approach of the Uvsq-Sat NG NIR Spectrometer relies on measuring spectra of sunlight scattered back by the Earth's surface and atmosphere in the NIR spectral range.
Figure 4: (top) Dimensionless transmittance functions TR(ν, T, P, L) for CO2, CH4, O2, and H2O as a function of wavelength, for nominal mixing ratios. (bottom) Relative variations of the transmittance functions from a "maximum" mixing ratio (430 ppm for CO2, 2.0 ppm for CH4, 0.22 for O2, and 1.1 precipitable cm for H2O) to the nominal mixing ratio.
Figure 5: (top) Dimensionless transmittance functions at the Uvsq-Sat NG Spectrometer resolution, assuming a spectral resolution of 1 nm (FWHM) and using a Gaussian convolution filter. (bottom) Relative variation between the "maximum" and nominal mixing ratios.
Figure 6: Uvsq-Sat NG NIR Spectrometer retrieval of the dimensionless signal TR_Spectro(λ) as observed by the spectrometer (spectral resolution of 1 nm, SNR of 50). The dark stars (noised data) represent the dimensionless signal (Step 4); the blue curve is the best fit obtained with the Levenberg–Marquardt algorithm (Step 5).
Figure 7: Retrieval of the gas concentrations observed by the Uvsq-Sat NG NIR Spectrometer; in this simulation the spectrometer has a resolution of 1 nm and an SNR of 50.
Figure 8: Retrieval of the gas concentrations observed by the Uvsq-Sat NG NIR Spectrometer; in this simulation the instrument has a resolution of 6 nm and an SNR of 2000.
19 pages, 38475 KiB  
Article
Segmentation and Connectivity Reconstruction of Urban Rivers from Sentinel-2 Multi-Spectral Imagery by the WaterSCNet Deep Learning Model
by Zixuan Dui, Yongjian Huang, Mingquan Wang, Jiuping Jin and Qianrong Gu
Remote Sens. 2023, 15(19), 4875; https://doi.org/10.3390/rs15194875 - 8 Oct 2023
Cited by 1 | Viewed by 1733
Abstract
Quick and automatic detection of the distribution and connectivity of urban rivers and their changes from satellite imagery is of great importance for urban flood control, river management, and ecological conservation. By improving the E-UNet model, this study proposed a cascaded river segmentation and connectivity reconstruction deep learning network model (WaterSCNet) to segment urban rivers from Sentinel-2 multi-spectral imagery and simultaneously reconstruct their connectivity obscured by road and bridge crossings from the segmentation results. The experimental results indicated that the WaterSCNet model could achieve better river segmentation and connectivity reconstruction results compared to the E-UNet, U-Net, SegNet, and HRNet models. Compared with the classic U-Net model, the MCC, F1, Kappa, and Recall evaluation metrics of the river segmentation results of the WaterSCNet model were improved by 3.24%, 3.10%, 3.36%, and 3.93%, respectively, and the evaluation metrics of the connectivity reconstruction results were improved by 4.25%, 4.11%, 4.37%, and 4.83%, respectively. The variance of the evaluation metrics of the five independent experiments indicated that the WaterSCNet model also had the best robustness compared to the other four models. Full article
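The four evaluation metrics reported (MCC, F1, Kappa, and Recall) are standard functions of a binary confusion matrix. A minimal sketch — not the authors' evaluation code; the toy masks are invented, and both classes are assumed present:

```python
import numpy as np

def mask_metrics(pred, truth):
    """MCC, F1, Cohen's kappa, and recall for binary masks.
    Assumes both positive and negative samples occur in `truth`
    and at least one positive prediction exists."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    n = tp + tn + fp + fn
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2.0 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return {"MCC": mcc, "F1": f1, "Kappa": kappa, "Recall": recall}

truth = np.array([1, 1, 0, 0])
pred = np.array([1, 0, 0, 0])   # one hit, one miss, no false alarms
m = mask_metrics(pred, truth)
```

For the toy masks above this gives Recall = 0.5, F1 = 2/3, Kappa = 0.5, and MCC = 2/√12 ≈ 0.577.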
Show Figures

Graphical abstract

Figure 1: Top-level architecture of the WaterSCNet model.
Figure 2: Architecture of the river segmentation subnetwork WaterSCNet-s.
Figure 3: Schematic of the additive attention gate.
Figure 4: Architecture of the river-network connectivity reconstruction subnetwork WaterSCNet-c.
Figure 5: Sentinel-2 multi-spectral images of the seven cities: (a) Tokyo, (b) Shanghai, (c) Dongguan, (d) Guangzhou, (e) Hanoi, (f) Manila, (g) Sydney; (h) locations of the seven cities (https://services.arcgisonline.com/arcgis/rest/services, accessed on 25 July 2023).
Figure 6: Sentinel-2 data processing, labeling, and slicing flowchart.
Figure 7: Examples of river segmentation labels and river connectivity reconstruction labels: (a) Sentinel-2 images, (b) river segmentation labels, (c) river connectivity reconstruction labels.
Figure 8: Schematic of the training process of the WaterSCNet model, a river segmentation and connectivity reconstruction network.
Figure 9: Examples of experimental results for river segmentation (Exp_Seg): (1) medium and small urban rivers, (2) large and small urban rivers, (3) small urban rivers, (4) urban lakes, (5) large and small suburban rivers.
Figure 10: Examples of experimental results for river connectivity reconstruction (Exp_Con): (1) medium and small urban rivers, (2) large and small urban rivers, (3) small urban rivers, (4) urban lakes, (5) large and small suburban rivers.
Figure 11: Examples of rivers located within the shadows of nearby tall buildings, with their segmentation and connectivity labels. (1–4) are four examples; red boxes mark the river segments lying within the shadows of tall buildings, and yellow boxes mark the shadows.
Figure 12: Segmentation results for the shadowed rivers in Figure 11. (1–4) correspond to the four examples in Figure 11; red boxes mark the same locations as the corresponding red-boxed areas in the Sentinel-2 images of Figure 11.
Figure 13: Connectivity reconstruction results for the shadowed rivers in Figure 11. (1–4) correspond to the four examples in Figure 11; red boxes mark the same locations as the corresponding red-boxed areas in the Sentinel-2 images of Figure 11.
24 pages, 27592 KiB  
Article
The Frinco Castle: From an Integrated Survey to 3D Modelling and a Stratigraphic Analysis for Helping Knowledge and Reconstruction
by Filippo Diara and Marco Roggero
Remote Sens. 2023, 15(19), 4874; https://doi.org/10.3390/rs15194874 - 8 Oct 2023
Cited by 2 | Viewed by 2360
Abstract
The Frinco Castle (AT, Italy) was the focus of a critical requalification and restoration project and of historical research. Over the centuries, the initial medieval nucleus was modified and enriched with other architectural parts that gave the castle its current shape, as well as its present internal and external complexity and an extreme structural fragility: in 2014, a significant portion collapsed. The main objective of this work was to obtain 3D metric documentation and a historical interpretation of the castle for reconstruction and fruition purposes. Since 2021, the local administration has planned knowledge-building activities: an integrated 3D geodetic survey of the entire castle and stratigraphic investigations of its masonries. Both surveys were essential for understanding the architectural composition as well as the historical evolution of the complex. NURBS modelling and a stratigraphic analysis of the masonries allowed the historical interpretation to be presented in an immersive 3D form. Furthermore, this modelling choice was essential for virtually reconstructing the collapsed area and supporting the restoration phase. Full article
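The profile-extraction step of such a survey (intersecting the point cloud with planar sections, as in the project's Figure 5) amounts to selecting points within a small tolerance of a cutting plane and projecting them onto it. The sketch below is purely illustrative: the synthetic "wall" cloud, the plane, and the tolerance are invented.

```python
import numpy as np

def planar_section(points, origin, normal, tol=0.02):
    """Return the points lying within `tol` of the plane through
    `origin` with normal `normal`, projected exactly onto that plane."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    d = (points - origin) @ normal          # signed distance to the plane
    near = np.abs(d) <= tol
    return points[near] - np.outer(d[near], normal)

# synthetic cloud: a vertical wall at x = 1 m with 5 mm scan noise
rng = np.random.default_rng(0)
cloud = np.column_stack([
    1.0 + rng.normal(0.0, 0.005, 10_000),   # depth (wall thickness noise)
    rng.uniform(0.0, 5.0, 10_000),          # along the wall
    rng.uniform(0.0, 8.0, 10_000),          # height
])
# horizontal section at z = 4 m, i.e. a plan-view profile of the wall
profile = planar_section(cloud, origin=(0.0, 0.0, 4.0),
                         normal=(0.0, 0.0, 1.0), tol=0.05)
```

The returned slice is an exactly planar polyline-ready set of points; in practice the survey software also orders and links them into drawable profiles.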
Show Figures

Figure 1

Figure 1
<p>The main workflow of the project: from 3D survey to stratigraphic analysis and NURBS modelling of the Frinco Castle.</p>
Full article ">Figure 2
<p>Castles, towers, and fortified settlements in the Province of Asti. Frinco is northward from Asti (image from Conti F. 1980 [<a href="#B2-remotesensing-15-04874" class="html-bibr">2</a>]).</p>
Full article ">Figure 3
<p>Aerial image of the Frinco Castle from the south side, from which the collapsed area is visible.</p>
Full article ">Figure 4
<p>Photogrammetric process and 3D model inside the Agisoft Metashape software: aerial images acquired by DJI Zenmuse P1.</p>
Full article ">Figure 5
<p>Profiles’ extraction from the intersection between the point clouds and planar sections passing through X, Y, and Z planes.</p>
Full article ">Figure 6
<p>Stratigraphic analysis for historical architecture: from unit detection and classification (USM-USR-EA) to the units’ physical sequence using the Harris Matrix. The unit relations shown are no connection (A), overlap (B), and equality (C).</p>
Full article ">Figure 7
<p>Stratigraphic units’ detection in the castle’s north front (small portion). Different masonries, openings, and other architectural and structural elements can be noticed.</p>
Full article ">Figure 8
<p>Synthesis of detected stratigraphic units: from masonry (USM) and render/plaster (USR) layers to architectural elements (EA).</p>
Full article ">Figure 9
<p>Part of the relational database for stratigraphic units’ classification and description, and the physical relations among them.</p>
Full article ">Figure 10
<p>Three of the nineteen stratigraphic diagrams produced for the analysis of the Frinco Castle: square tower—forepart in the south front (<b>A</b>); the central part of the west front (<b>B</b>); the area of the collapse in the south front (<b>C</b>).</p>
Full article ">Figure 11
<p>West side of the squared access tower: from the units’ detection to the stratigraphic analysis and interpretation via Harris’ Matrix for understanding the relative chronology of layers on masonries.</p>
Full article ">Figure 12
<p>Section of the central part of the west front of the castle: on the left side, the earlier masonry and decorations (Period I), while on the right side, the connected masonry and Ghibelline decorations related to Period III.</p>
Full article ">Figure 13
<p>Ghibelline dovetail comparisons: (<b>A</b>) Castello Molare (AL); (<b>B</b>) Torre Scarampa-Bertamenga (AT). Drawings of Fonio M.R., from Conti F., 1980.</p>
Full article ">Figure 14
<p>Part of the south front of the internal courtyard: stratigraphic units (USM, USR, and EA), analysis, and interpretation.</p>
Full article ">Figure 15
<p>From the stratigraphic analysis and interpretation via Harris’ Matrix to the digital representation on orthophotos: matrix and orthophoto of the north front (<b>A</b>); matrix and orthophoto of the collapsed area (<b>B</b>).</p>
Full article ">Figure 16
<p>Comparisons on bichrome pointed arches related to doors and windows (ogival): (<b>A</b>) Verasis-Asinari Palace; (<b>B</b>) Roero Monteu Tower. From Google Street View.</p>
Full article ">Figure 17
<p>Tessarolo castle (AL). The architectural schema (circular tower, loggia, squared tower, and internal courtyard) is similar to the west side of the Frinco castle. From Conti F., 1980.</p>
Full article ">Figure 18
<p>Summary of the detected chronological periods with characterizing elements and historical interpretations.</p>
Full article ">Figure 19
<p>NURBS modelling of the castle: surfaces and geometries modelled on wired profiles generated from planar sections’ extraction.</p>
Full article ">Figure 20
<p>NURBS model of the castle in its current state: the collapsed area on the south side.</p>
Full article ">Figure 21
<p>Standard deviation analysis between point clouds and NURBS concerning the circular tower of the castle: (<b>A</b>) NURBS simplified model; (<b>B</b>) point clouds; (<b>C</b>) scalar fields.</p>
Full article ">Figure 22
<p>Textures and orthophotos mapped on the NURBS model of the castle in its current state: the collapsed area on the south side.</p>
Full article ">Figure 23
<p>From the stratigraphic analysis to the immersive 3D model. Section of the west side: matrix and period of the central part of the west front (<b>a</b>,<b>b</b>); understanding of the cylindric tower (cylindric unwrap) (<b>c</b>); NURBS model with textures and understanding (<b>d</b>,<b>e</b>).</p>
Full article ">Figure 24
<p>3D NURBS model of the Frinco Castle coloured depending on historical interpretation carried out via stratigraphic diagrams. Details of the northwest and southeast of the castle.</p>
Full article ">Figure 25
<p>Three-dimensional NURBS model of the Frinco Castle with the stratigraphic analysis and interpretation: the legend shows the period sequences.</p>
Full article ">Figure 26
<p>The 3D-rendered model proposed for reconstructing the collapsed area: modelled with Rhinoceros and V-Ray. South front of the castle.</p>
Full article ">Figure 27
<p>The 3D-rendered model proposed for reconstructing the collapsed area: modelled with Rhinoceros and V-Ray. Isometric view of the south front of the castle.</p>
Full article ">Figure 28
<p>The Frinco Castle project: from the integrated survey to the NURBS modelling and metric analysis and from the stratigraphic analysis and the historical interpretation to the collapsed area reconstruction.</p>
Full article ">
16 pages, 3096 KiB  
Technical Note
Revealing the Potential of Deep Learning for Detecting Submarine Pipelines in Side-Scan Sonar Images: An Investigation of Pre-Training Datasets
by Xing Du, Yongfu Sun, Yupeng Song, Lifeng Dong and Xiaolong Zhao
Remote Sens. 2023, 15(19), 4873; https://doi.org/10.3390/rs15194873 - 8 Oct 2023
Cited by 4 | Viewed by 2050
Abstract
This study introduces a novel approach to the critical task of submarine pipeline or cable (POC) detection by employing GoogleNet for the automatic recognition of side-scan sonar (SSS) images. The traditional interpretation methods, heavily reliant on human interpretation, are replaced with a more [...] Read more.
This study introduces a novel approach to the critical task of submarine pipeline or cable (POC) detection by employing GoogleNet for the automatic recognition of side-scan sonar (SSS) images. The traditional interpretation methods, heavily reliant on human interpretation, are replaced with a more reliable deep-learning-based methodology. We explored the enhancement of model accuracy via transfer learning and scrutinized the influence of three distinct pre-training datasets on the model’s performance. The results indicate that GoogleNet facilitated effective identification, with accuracy and precision rates exceeding 90%. Furthermore, pre-training with the ImageNet dataset increased prediction accuracy by about 10% compared to the model without pre-training. The model’s prediction ability was best promoted by pre-training datasets in the following order: Marine-PULSE ≥ ImageNet > SeabedObjects-KLSG. Our study shows that pre-training dataset categories, dataset volume, and data consistency with predicted data are crucial factors affecting pre-training outcomes. These findings set the stage for future research on automatic pipeline detection using deep learning techniques and emphasize the significance of suitable pre-training dataset selection for CNN models. Full article
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing II)
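The accuracy, precision, recall, and F1 scores reported above follow the standard binary confusion-matrix definitions. A minimal Python sketch with made-up labels (not the paper's data) shows how these four metrics are computed:

```python
# Standard binary-classification metrics from a confusion matrix
# (illustrative labels only, not the paper's data).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# 1 = pipeline/cable present, 0 = absent (hypothetical predictions)
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

These are the same four quantities that the paper tracks per epoch when comparing pre-training datasets.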
Show Figures

Figure 1

Figure 1
<p>Structure of Inception [<a href="#B30-remotesensing-15-04873" class="html-bibr">30</a>].</p>
Full article ">Figure 2
<p>Samples from the Marine-PULSE dataset. Samples in rows (<b>a</b>–<b>d</b>) are pipelines or cables, underwater residual mounds, seabed surface, and engineering platforms, respectively.</p>
Full article ">Figure 3
<p>Flow chart of data division, experiment cases, and accuracy evaluation.</p>
Full article ">Figure 4
<p>Variation in prediction evaluation metrics of the model in the test dataset over 100 epochs. (<b>a</b>) Accuracy; (<b>b</b>) precision; (<b>c</b>) recall; (<b>d</b>) F1 score.</p>
Full article ">Figure 5
<p>The effect of transfer learning on the prediction accuracy of different CNN models on the test dataset. pI = pre-training with ImageNet dataset; np = no pre-training. (<b>a</b>–<b>d</b>) represent the accuracy, precision, recall, and F1 score of the model’s calculations with and without transfer learning, respectively.</p>
Full article ">Figure 6
<p>Variation in prediction evaluation metrics over 100 epochs in the test set using models with different training datasets. pI = pre-training with ImageNet dataset; pS = pre-training with SeabedObjects-KLSG dataset; pY = pre-training with train_B from Marine-PULSE dataset. (<b>a</b>–<b>d</b>) represent the accuracy, precision, recall, and F1 score of the model computation results using the pI, pS, and pY pretraining datasets, respectively.</p>
Full article ">Figure 7
<p>Statistics of prediction evaluation metrics in the test set using models with different pre-training datasets. pI = pre-training with ImageNet dataset; pS = pre-training with SeabedObjects-KLSG dataset; pY = pre-training with train_B from Marine-PULSE dataset. The last 50 epochs of the model predictions were used for statistical analysis. The red dots indicate the maximum values of the 50 sets of predicted results. (<b>a</b>–<b>d</b>) represent the statistical analysis of accuracy, precision, recall, and F1 score of the model computation results using the pI, pS, and pY pretraining datasets, respectively.</p>
Full article ">
21 pages, 11996 KiB  
Article
Construction and Analysis of the EMC Evaluation Model for Vehicular Communication Systems Based on Digital Maps
by Guangshuo Zhang, Hongmin Lu, Shiwei Zhang, Fulin Wu, Yangzhen Qin and Bo Jiang
Remote Sens. 2023, 15(19), 4872; https://doi.org/10.3390/rs15194872 - 8 Oct 2023
Viewed by 1333
Abstract
With the development of vehicular communication technology, the electromagnetic compatibility requirements of vehicular communication systems are becoming more demanding. The traditional four-level electromagnetic compatibility evaluation model is widely applied in many scenarios. However, this model neglects the mutual interference of electronic devices inside [...] Read more.
With the development of vehicular communication technology, the electromagnetic compatibility requirements of vehicular communication systems are becoming more demanding. The traditional four-level electromagnetic compatibility evaluation model is widely applied in many scenarios. However, this model neglects the mutual interference of electronic devices inside a vehicle, and it cannot evaluate whether reduced radio receiver sensitivity, antenna isolation, and communication distance satisfy the system requirements for vehicular communication, thus making it unsuitable for digital communication systems. With the development of remote sensing technology, high-precision digital maps are easy to acquire and thus widely used. In this work, a modified five-level evaluation model based on digital maps is proposed, where digital maps are employed to support receiver sensitivity, antenna isolation, and communication performance evaluation. Through remote sensing technology and digital maps, a terrain profile is obtained, and a more accurate vehicle communication propagation model is established. In the experiment, a real armored vehicular communication system is used as an example to verify the performance of the proposed five-level evaluation model. Compared with the free-space propagation model, the error in the power actually received by the receiver is reduced by 0.97%, and the error in the communication distance at which the receiver sensitivity is degraded beyond the system EMC threshold is reduced by 16.78%. The calculated antenna isolation is essentially consistent with the measured data. The model evaluates the electromagnetic compatibility of an armored vehicular communication system more quickly, accurately, and comprehensively than previous evaluation models. Full article
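The free-space propagation model used above as the comparison baseline is the standard Friis path-loss relation. A minimal sketch with hypothetical link parameters (10 W transmitter, unity-gain antennas, 5 km at 60 MHz; none of these are the paper's values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    # Friis free-space loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, distance_m, freq_hz):
    # Link budget in dB units: Pr = Pt + Gt + Gr - FSPL
    return pt_dbm + gt_dbi + gr_dbi - free_space_path_loss_db(distance_m, freq_hz)

# Hypothetical VHF link: 10 W (40 dBm), unity-gain antennas, 5 km, 60 MHz
pr = received_power_dbm(40.0, 0.0, 0.0, 5000.0, 60e6)
```

A terrain-aware model such as the paper's replaces the FSPL term with a loss that accounts for the profile between the two vehicles; the link-budget structure stays the same.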
Show Figures

Figure 1

Figure 1
<p>Flowchart of the proposed evaluation model.</p>
Full article ">Figure 2
<p>Vehicle-communication-propagation-model-based graphical depiction of terms used in the transmission loss.</p>
Full article ">Figure 3
<p>Satellite map of the area between the two vehicles.</p>
Full article ">Figure 4
<p>Contour map of the actual communication path.</p>
Full article ">Figure 5
<p>Topographic profile of the actual communication path.</p>
Full article ">Figure 6
<p>Model diagram of the actual communication path.</p>
Full article ">Figure 7
<p>Comparison of propagation loss under three conditions.</p>
Full article ">Figure 8
<p>Actual power received by receiver VHF3.</p>
Full article ">Figure 9
<p>Sensitivity reduction of receiver VHF3.</p>
Full article ">Figure 10
<p>Comparison of the results from the calculation and measurement of antenna isolation. (<b>a</b>) HF1 and VHF3; (<b>b</b>) VHF2 and VHF3.</p>
Full article ">Figure 11
<p>Relationship between the receiver VHF3 sensitivity reduction and the communication distance reduction.</p>
Full article ">Figure 12
<p>Simulation model of the communication link.</p>
Full article ">Figure 13
<p>Relationship between BER and SNR for seven modulation types.</p>
Full article ">Figure 14
<p>Relationship between BER and SNR for seven channel-coding modes.</p>
Full article ">
21 pages, 4281 KiB  
Article
Insighting Drivers of Population Exposure to Ambient Ozone (O3) Concentrations across China Using a Spatiotemporal Causal Inference Method
by Junming Li, Jing Xue, Jing Wei, Zhoupeng Ren, Yiming Yu, Huize An, Xingyan Yang and Yixue Yang
Remote Sens. 2023, 15(19), 4871; https://doi.org/10.3390/rs15194871 - 8 Oct 2023
Viewed by 1590
Abstract
Ground-level ozone (O3) is a well-known atmospheric pollutant aside from particulate matter. As a populous country, China is facing serious surface O3 pollution. To detect the complex spatiotemporal transformation of the population exposure to ambient O3 pollution [...] Read more.
Ground-level ozone (O3) is a well-known atmospheric pollutant aside from particulate matter. As a populous country, China is facing serious surface O3 pollution. To detect the complex spatiotemporal transformation of the population exposure to ambient O3 pollution in China from 2005 to 2019, the Bayesian multi-stage spatiotemporal evolution hierarchy model was employed. To gain insight into the drivers of population exposure to ambient O3 pollution in China, a Bayesian spatiotemporal LASSO regression model (BST-LASSO-RM) and spatiotemporal propensity score matching (STPSM) were first applied; then, a spatiotemporal causal inference method integrating the BST-LASSO-RM and STPSM was presented. The results show that the spatial pattern of the annual population-weighted ground-level O3 (PWGLO3) concentrations, representing population exposure to ambient O3, in China has transformed since 2014. Most regions (72.2%) experienced a decreasing trend in PWGLO3 pollution in the early stage, but in the late stage, most areas (79.3%) underwent an increasing trend. Some drivers of PWGLO3 concentrations have partial spatial spillover effects: the PWGLO3 concentrations in a region can be driven by the economic factors, wind speed, and PWGLO3 concentrations of its surrounding areas. The major drivers changed from six local factors in 2005–2014 to five local factors and one spatially adjacent factor in 2015–2019. The driving effects of the traffic and green factors have no spatial spillover. Three traffic factors showed a negative driving effect in the early stage, but only one, bus ridership per capita (BRPC), retains the negative driving effect in the late stage. The factor with the maximum driving contribution is BRPC in the early stage but PM2.5 pollution in the late stage, with a corresponding driving contribution of 17.57%. Green area per capita and urban green coverage rates have positive driving effects. The driving effects of the climate factors intensified from the early to the late stage. Full article
(This article belongs to the Special Issue Remote Sensing of Aerosols, Planetary Boundary Layer, and Clouds)
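The propensity score matching step in the STPSM described above pairs treated and control units with the closest propensity scores. A minimal greedy 1:1 nearest-neighbour matcher within a caliper illustrates the core idea (the paper's spatiotemporal conditioning is omitted, and all values below are made up):

```python
def nearest_neighbor_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores,
    the core step of PSM (spatiotemporal extensions omitted)."""
    matches = {}
    used = set()
    for i, ps_t in enumerate(treated_ps):
        best, best_d = None, caliper
        for j, ps_c in enumerate(control_ps):
            if j in used:
                continue  # each control unit is matched at most once
            d = abs(ps_t - ps_c)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:  # no match if nothing falls within the caliper
            matches[i] = best
            used.add(best)
    return matches

# Hypothetical scores: two treated regions, three candidate controls
matches = nearest_neighbor_match([0.3, 0.7], [0.72, 0.28, 0.5])
```

The caliper discards poor matches rather than forcing them, which is the usual way PSM controls confounding at the cost of sample size.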
Show Figures

Figure 1

Figure 1
<p>The schematic diagram between outcome variable (dependent variable) and determinants and its proxy variables.</p>
Full article ">Figure 2
<p>Flow diagram of the spatiotemporal causal inference framework.</p>
Full article ">Figure 3
<p>The overall spatial pattern of the <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>W</mi> <mi>G</mi> <mi>L</mi> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> concentrations in the early stage (<b>A</b>) and late-stage (<b>B</b>) and the transformation (<b>C</b>) of the composition of the five classes of <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>W</mi> <mi>G</mi> <mi>L</mi> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> levels between the two stages.</p>
Full article ">Figure 4
<p>The scatters and Bayesian multi-stage spatiotemporal evolution hierarchy model fitted polylines of the annual population-weighted ozone (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math>) concentrations in the example of 12 prefecture-level regions in China from 2005 to 2019.</p>
Full article ">Figure 5
<p>The local annual change of the <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>W</mi> <mi>G</mi> <mi>L</mi> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> concentrations in China at the sub-provincial scale in the early stage.</p>
Full article ">Figure 6
<p>The local annual change in the <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>W</mi> <mi>G</mi> <mi>L</mi> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> concentrations of China at a sub-provincial scale in the late stage.</p>
Full article ">Figure 7
<p>Normalised regression results of the Bayesian LASSO regression model between the <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>W</mi> <mi>G</mi> <mi>L</mi> <msub> <mrow> <mi>O</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> concentrations and the significant driving factors in the two stages.</p>
Full article ">
23 pages, 20473 KiB  
Article
Application of Sparse Regularization in Spherical Radial Basis Functions-Based Regional Geoid Modeling in Colorado
by Haipeng Yu, Guobin Chang, Shubi Zhang, Yuhua Zhu and Yajie Yu
Remote Sens. 2023, 15(19), 4870; https://doi.org/10.3390/rs15194870 - 8 Oct 2023
Cited by 4 | Viewed by 1541
Abstract
The spherical radial basis function (SRBF) is an effective method for calculating regional gravity field models. Calculating gravity field models with high accuracy and resolution requires dense basis functions, resulting in complex models. This study investigated the application of sparse regularization in SRBF-based regional [...] Read more.
The spherical radial basis function (SRBF) is an effective method for calculating regional gravity field models. Calculating gravity field models with high accuracy and resolution requires dense basis functions, resulting in complex models. This study investigated the application of sparse regularization in SRBF-based regional gravity field modeling. L1-norm regularization, also known as the least absolute shrinkage and selection operator (LASSO), was employed in the parameter estimation procedure. LASSO differs from L2-norm regularization in that the solution it yields is sparse, i.e., a portion of the parameters are exactly zero. A sparse model is advantageous for improving numerical efficiency by reducing the number of SRBFs. The LASSO optimization problem was solved using the fast iterative shrinkage-thresholding algorithm, which is known for its high efficiency. The regularization parameter was selected using the Akaike information criterion, specifically tailored to the L1-norm regularization problem. An approximate covariance matrix of the estimated parameters in the sparse solution was analytically constructed from a Bayesian viewpoint. Based on the remove–compute–restore technique, a regional geoid model of Colorado (USA) was calculated. The numerical results suggest that the LASSO adopted in this study provided results competitive with Tikhonov regularization, yet the number of basis functions in the final model was less than 25% of that of the Tikhonov solution. Without significantly reducing model accuracy, the LASSO solution provides a much simpler model. This is the first study to apply the LASSO to SRBF-based modeling of the regional gravity field with real gravity observation data. Full article
(This article belongs to the Section Earth Observation Data)
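The estimator described above (L1-norm/LASSO solved with the fast iterative shrinkage-thresholding algorithm) reduces to iterating a gradient step followed by a soft-threshold. A minimal pure-Python FISTA sketch on a toy system (not the SRBF design matrix; the step-size bound uses the squared Frobenius norm for simplicity):

```python
import math

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrink each entry toward zero
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def fista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA.
    L is taken as the squared Frobenius norm of A, a safe upper
    bound on the Lipschitz constant of the smooth gradient."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)
    x = [0.0] * n
    y = list(x)
    t = 1.0
    for _ in range(n_iter):
        # Gradient of the smooth part at y: A^T (A y - b)
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = soft_threshold([y[j] - g[j] / L for j in range(n)], lam / L)
        # Momentum update (this is what makes FISTA "fast" vs plain ISTA)
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [x_new[j] + (t - 1.0) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# Toy check: with A = I the LASSO solution is soft_threshold(b, lam)
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x_hat = fista_lasso(I3, [3.0, 0.5, -2.0], lam=1.0)
```

The toy solution is [2, 0, -1]: the small middle coefficient is driven exactly to zero, which is the sparsity property the abstract exploits to prune basis functions.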
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Topographic heights of the study area, and GSVS17 (222 points of the purple line); (<b>b</b>) original terrestrial (green points) and airborne (blue flight tracks) gravity datasets; target area <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi mathvariant="normal">T</mi> </msub> </mrow> </semantics></math> (yellow rectangle), data area <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi mathvariant="normal">D</mi> </msub> </mrow> </semantics></math> (black rectangle), parameterization area <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi mathvariant="normal">P</mi> </msub> </mrow> </semantics></math>, and SRBF nodes (red points).</p>
Full article ">Figure 2
<p>(<b>a</b>,<b>b</b>) The terrestrial (residual) gravity anomaly observations; (<b>c</b>,<b>d</b>) the airborne (residual) gravity disturbance observations.</p>
Full article ">Figure 3
<p>The model residuals of the terrestrial gravity data ((<b>a</b>) LASSO and (<b>c</b>) Tikhonov regularization) and the airborne gravity data ((<b>b</b>) LASSO, and (<b>d</b>) Tikhonov regularization).</p>
Full article ">Figure 4
<p>Distribution histograms of the estimated parameters <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">β</mi> <mo stretchy="false">^</mo> </mover> </semantics></math>. (<b>a</b>) LASSO, (<b>b</b>) Tikhonov regularization.</p>
Full article ">Figure 5
<p>Estimated parameters ((<b>a</b>) LASSO and (<b>c</b>) Tikhonov regularization) and their standard deviations ((<b>b</b>) LASSO and (<b>d</b>) Tikhonov regularization).</p>
Full article ">Figure 6
<p>(<b>a</b>) The geometric and gravimetric geoid heights at the GSVS17 benchmarks; (<b>b</b>) differences between the geometric and gravimetric geoid heights.</p>
Full article ">Figure 7
<p>Variograms of the differences between geometric and gravimetric geoid heights at the GSVS17 benchmarks. (<b>a</b>) GSVS17 between marks 1 and 222, (<b>b</b>) GSVS17 between marks 1 and 160.</p>
Full article ">Figure 8
<p>Quasigeoid models of the whole study area, with a grid resolution of <math display="inline"><semantics> <mrow> <msup> <mn>1</mn> <mo>′</mo> </msup> <mo>×</mo> <msup> <mn>1</mn> <mo>′</mo> </msup> </mrow> </semantics></math> ((<b>a</b>) the LASSO solution and (<b>c</b>) the Tikhonov regularization solution) and their standard deviations ((<b>b</b>) the LASSO solution and (<b>d</b>) the Tikhonov regularization solution).</p>
Full article ">Figure 9
<p>Quasigeoid models differences. (<b>a</b>) The differences between the LASSO solution and group mean; (<b>b</b>) the differences between the Tikhonov regularization solution and the group mean; (<b>c</b>) the differences between the LASSO and Tikhonov regularization solutions.</p>
Full article ">Figure 10
<p>The standard deviations of the quasigeoid model, calculated using Fan’s method.</p>
Full article ">
24 pages, 13285 KiB  
Article
Study on the Regeneration Probability of Understory Coniferous Saplings in the Liangshui Nature Reserve Based on Four Modeling Techniques
by Haiping Zhao, Yuman Sun, Weiwei Jia, Fan Wang, Zipeng Zhao and Simin Wu
Remote Sens. 2023, 15(19), 4869; https://doi.org/10.3390/rs15194869 - 8 Oct 2023
Cited by 1 | Viewed by 1671
Abstract
Forests are one of the most important natural resources for humans, and understanding the regeneration probability of undergrowth in forests is very important for future forest spatial structure and forest management. In addition, the regeneration of understory saplings is a key process in [...] Read more.
Forests are one of the most important natural resources for humans, and understanding the regeneration probability of undergrowth in forests is very important for future forest spatial structure and forest management. In addition, the regeneration of understory saplings is a key process in the restoration of forest ecosystems. By studying the probability of sapling regeneration in forests, we can understand the impact of different stand factors and environmental factors on sapling regeneration. This could help provide a scientific basis for the restoration and protection of forest ecosystems. The Liangshui Nature Reserve of Yichun City, Heilongjiang Province, is a coniferous and broadleaved mixed forest. In this study, we assess the regeneration probability of coniferous saplings (CRP) in natural forests in 665 temporary plots in the Liangshui Nature Reserve. Using Sentinel-1 and Sentinel-2 images provided by the European Space Agency, as well as digital elevation model (DEM) data, we calculated the vegetation index, microwave vegetation index (RVI S1), VV, VH, texture features, slope, and DEM and combined them with field survey data to construct a logistic regression (LR) model, geographically weighted logistic regression (GWLR) model, random forest (RF) model, and multilayer perceptron (MLP) model to predict and analyze the CRP value of each pixel in the study area. The accuracy of the models was evaluated with the average values of the area under the ROC curve (AUC), kappa coefficient (KAPPA), root mean square error (RMSE), and mean absolute error (MAE) verified by five-fold cross-validation. The results showed that the RF model had the highest accuracy. The variable factor with the greatest impact on CRP was the DEM. The construction of the GWLR model considered more spatial factors and had a lower residual Moran index value. The four models predicted higher CRP values in the low-latitude, low-longitude part of the study area, while in the high-latitude, high-longitude part most pixels had a CRP value of 0 (i.e., no coniferous sapling regeneration occurred). Full article
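The five-fold cross-validation protocol above averages AUC, KAPPA, RMSE, and MAE over folds. The fold splitting and two of the metrics can be sketched generically (hypothetical data, not connected to the paper's plots):

```python
import math

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size
    (real pipelines usually shuffle first; omitted here for brevity)."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cohen_kappa(y_true, y_pred):
    # Binary Cohen's kappa: observed vs chance-expected agreement
    n = len(y_true)
    po = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    p1t = sum(y_true) / n
    p1p = sum(y_pred) / n
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)
    return (po - pe) / (1 - pe) if pe < 1 else 0.0

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

folds = k_fold_indices(10, 5)  # five folds over ten hypothetical plots
```

Each fold serves once as the test set while the remaining folds train the model; the reported score is the mean over the k test folds.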
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Distribution diagram of the study area and sample plots. (<b>a</b>) The specific location of the Liangshui Nature Reserve in China and (<b>b</b>) the distribution of ground sample plots in subcompartments.</p>
Full article ">Figure 2
<p>Technical route. “*18” represents that we extracted 18 Sentinel 2 vegetation indices.</p>
Full article ">Figure 3
<p>CRP distribution rule analysis. The curve shows the change trend of the CRP at different longitudes and latitudes under the same stand type. The bar chart shows the average CRP distribution along the longitudinal and latitudinal directions under the same stand type.</p>
Full article ">Figure 4
<p>Schematic diagram of the RF model.</p>
Full article ">Figure 5
<p>Schematic diagram of the MLP model. O represents a neuron, W represents a weight, and the broken line shows the variation in MAE with the number of iterations (epochs). 579: the model weights from the 579th iteration are returned.</p>
Full article ">Figure 6
<p>K-fold cross-validation diagram.</p>
Full article ">Figure 7
<p>Spatial distribution of GWLR variable coefficients.</p>
Full article ">Figure 8
<p>Ranking of importance of variables. Due to the high number of variable factors, they were divided into two columns.</p>
Full article ">Figure 9
<p>Determination of optimal threshold segmentation points for all samples. The points in the graphs are the ROC curve coordinates corresponding to the optimal segmentation thresholds.</p>
Full article ">Figure 10
<p>Spatial statistics of model prediction results. The large image shows the distribution trend of the CRP in the study area; the small image shows the segmentation results based on the optimal threshold; 0—no coniferous sapling regeneration; 1—coniferous sapling regeneration.</p>
Full article ">Figure 11
<p>Moran’s I coordination curves. The scatter plots in the figure represent the sample locations, and the curves were fitted based on them.</p>
Full article ">Figure 12
<p>Kappa values corresponding to different segmentation thresholds.</p>
Full article ">
16 pages, 10728 KiB  
Technical Note
Revealing the Kinematic Characteristics and Tectonic Implications of a Buried Fault through the Joint Inversion of GPS and Strong-Motion Data: The Case of the 2022 Mw7.0 Taiwan Earthquake
by Chuanchao Huang, Chaodi Xie, Guohong Zhang, Wan Wang, Min-Chien Tsai and Jyr-Ching Hu
Remote Sens. 2023, 15(19), 4868; https://doi.org/10.3390/rs15194868 - 8 Oct 2023
Viewed by 1766
Abstract
Understanding the kinematic characteristics of the Longitudinal Valley Fault Zone (LVFZ) can help us to better understand the evolution of orogens. The 2022 Mw7.0 Taitung earthquake that occurred in Taiwan provides us with a good opportunity to understand the motion characteristics of the [...] Read more.
Understanding the kinematic characteristics of the Longitudinal Valley Fault Zone (LVFZ) can help us to better understand the evolution of orogens. The 2022 Mw7.0 Taitung earthquake that occurred in Taiwan provides us with a good opportunity to understand the motion characteristics of the Central Range Fault (CRF) and the strain partitioning pattern within the Longitudinal Valley Fault (LVF). We obtained the coseismic displacement and slip distribution of the 2022 Taiwan earthquake based on the available strong-motion and GPS data. The causative fault of this earthquake is the west-dipping Central Range Fault, which is buried beneath the western boundary of the LVF. The coseismic displacement field exhibits a quadrant distribution pattern, indicating a left-lateral strike-slip mechanism with a maximum displacement exceeding 1.25 m. The joint inversion results show that the main asperity measures 40 km × 20 km, and the maximum slip of 2.6 m is located at a depth of 10 km, equivalent to an Mw7.04 earthquake. The LVFZ is composed of the LVF and the CRF and accommodates nearly half of the oblique convergence rate between the Philippine Sea Plate and the Eurasian Plate. Strain partitioning occurs in the southern segment of the Longitudinal Valley Fault Zone: the Central Range Fault primarily accommodates strike-slip motion, while the Longitudinal Valley Fault mainly accommodates thrust motion. Full article
(This article belongs to the Special Issue Monitoring Subtle Ground Deformation of Geohazards from Space)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Map showing the main tectonic elements of Taiwan. (<b>b</b>) Close-up of the area marked by the red rectangle in (<b>a</b>): the red pentagram represents the Mw7.0 mainshock of 18 September 2022, and the blue pentagram represents the M6.6 foreshock of 17 September 2022. Gray circles represent aftershocks of the 2022 earthquake sequence, blue circles represent historical earthquakes, and black beach balls represent earthquake focal mechanisms. The black arrow shows the convergence rate between the Eurasian Plate and the Philippine Sea Plate [<a href="#B18-remotesensing-15-04868" class="html-bibr">18</a>]. CRF: Central Range Fault, LVF: Longitudinal Valley Fault.</p>
Figure 2
<p>Baseline correction of strong-motion data for station G020. (<b>a</b>) Uncorrected acceleration waveform; (<b>b</b>) velocity waveform, with the dotted line indicating uncorrected and the solid line indicating baseline-corrected data; and (<b>c</b>) displacement waveform, with the dotted line indicating uncorrected and the solid line indicating baseline-corrected data.</p>
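The reason baseline correction matters is visible in a few lines of arithmetic: a small constant offset left in the acceleration record grows quadratically under double integration and swamps the true displacement. A minimal sketch with a synthetic record (the pulse shape, offset size, and tail-mean correction are illustrative only, not the scheme used in the paper):

```python
import numpy as np

# Synthetic strong-motion record: a band-limited acceleration pulse around
# t = 10 s plus a small constant baseline offset appearing after the shaking.
dt = 0.01                               # sample interval, s
t = np.arange(0.0, 60.0, dt)
acc_true = 0.5 * np.exp(-((t - 10) / 1.5) ** 2) * np.sin(2 * np.pi * (t - 10))
offset = 0.002                          # m/s^2 baseline error
acc_raw = acc_true + offset * (t > 12)

def integrate(x, dt):
    """Cumulative trapezoidal integration starting from zero."""
    out = np.zeros_like(x)
    out[1:] = np.cumsum(0.5 * (x[1:] + x[:-1])) * dt
    return out

# Without correction, the tiny offset produces meters of spurious displacement.
disp_raw = integrate(integrate(acc_raw, dt), dt)

# Crude baseline correction: estimate the post-event offset from the quiet tail
# and subtract it before integrating (real correction schemes are more elaborate).
tail_mean = np.mean(acc_raw[t > 40])
acc_corr = acc_raw - tail_mean * (t > 12)
disp_corr = integrate(integrate(acc_corr, dt), dt)
```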
Figure 3
<p>Comparison between the static displacements from the GPS data and from the strong-motion data after baseline correction. Each group of three bar pairs compares a closely co-located pair of strong-motion and GPS stations. Pink represents the static displacement of the strong-motion data, and green represents the static displacement of the GPS data.</p>
Figure 4
<p>Coseismic displacement field of the ground surface. (<b>a</b>) The horizontal and (<b>b</b>) the vertical coseismic displacement fields. Red arrows represent displacements obtained from the strong-motion data, and blue arrows represent displacements calculated from the GPS data. The black line represents the Longitudinal Valley Fault Zone, and the green line represents the fault model used in the inversion.</p>
Figure 5
<p>(<b>a</b>) Fitting residual (vertical axis) versus fault dip angle (horizontal axis), where 90 represents a dip of 90°, values less than 90° represent a west-dipping fault, and values greater than 90° represent an east-dipping fault. (<b>b</b>) Fitting residual versus smoothing factor (horizontal axis), where the blue curve represents roughness, the red curve represents the fitting residual, and the red dot marks the optimal location.</p>
Figure 6
<p>Fault slip distribution. (<b>a</b>) The inversion results using only the strong-motion data, (<b>b</b>) the inversion results using only the GPS data, (<b>c</b>) the inversion results combining both data types, and (<b>d</b>) the coseismic slip distribution from inversion of GPS data for the 2003 Chengkung earthquake. The red pentagon represents the location of the hypocenter, and the white arrows represent the slip direction of each subfault.</p>
Figure 7
<p>Comparison between observed and simulated data based on the joint inversion model. (<b>a</b>) The horizontal and (<b>b</b>) the vertical coseismic displacement fields. Red arrows represent observed data, and blue arrows represent simulated data. The black line represents the Longitudinal Valley Fault Zone, and the green line represents the fault model used in the inversion.</p>
Figure 8
<p>Slip distribution resolution test. (<b>a</b>) The input slip distribution, (<b>b</b>) the inversion results based on the strong-motion network, (<b>c</b>) the inversion results based on the GPS network, and (<b>d</b>) the joint inversion results based on both data types. The blue star represents the earthquake hypocenter. SM: strong motion; GPS: Global Positioning System.</p>
Figure 9
<p>Spatial relationship between the Central Range and Longitudinal Valley Faults. The fault on the eastern boundary of the Central Range is the Central Range Fault, with the red dotted line indicating that the fault is not exposed at the surface; the fault on the western boundary of the Coastal Range is the Longitudinal Valley Fault. The red and blue stars indicate the hypocenters of the 2022 Mw7.0 earthquake and the 2003 Mw6.5 earthquake, respectively. CRF: Central Range Fault, LVF: Longitudinal Valley Fault.</p>
15 pages, 5099 KiB  
Article
The Latest Desertification Process and Its Driving Force in Alxa League from 2000 to 2020
by Jiali Xie, Zhixiang Lu, Shengchun Xiao and Changzhen Yan
Remote Sens. 2023, 15(19), 4867; https://doi.org/10.3390/rs15194867 - 8 Oct 2023
Cited by 6 | Viewed by 1782
Abstract
Alxa League of Inner Mongolia Autonomous Region is a concentrated desert distribution area in China, and the latest desertification process and its driving mechanism, under the combined influence of an extremely dry climate and intense human activities, have attracted much attention. Landsat data, including ETM+ images from 2000, TM images from 2010, and OLI images from 2020, were used to extract three periods of desertification land information for Alxa League with the classification and regression tree (CART) decision tree classification method. The spatio-temporal variation characteristics of desertification land were analyzed by combining a transfer matrix with a barycenter migration model, and the effects of climate change and human activities on regional desertification evolution were separated using the multiple regression residual analysis method, taking the influence of non-zonal factors into account. The results showed that from 2000 to 2020, the overall area of desertification land in Alxa League was reduced, the desertification degree was alleviated, the desertification trend was reversed, and desertification in the northern part of the region remained more serious than in the south. The barycenters of the slight, moderate, and severe desertification land migrated to the southeast, whereas the barycenter of the serious desertification land migrated to the northwest during 2000–2010; all of them hardly moved from 2010 to 2020. Desertification reversal was more significant in the south than in the north. Regional desertification reversal was mainly influenced by the combination of human activities and climate change, accounting for 61.5% of the reversed area, whereas localized desertification development was mainly driven by human activities, accounting for 76.8%. Full article
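The multiple regression residual analysis mentioned above works by regressing a vegetation signal on climate variables and attributing the residual trend to human activity. A single-pixel sketch with synthetic data (all coefficients, including the 0.004/yr "human" trend, are invented for illustration and are not values from this study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2000-2020 annual series for one pixel.
years = np.arange(2000, 2021)
precip = 120 + 15 * rng.normal(size=years.size)   # mm, interannual variability
temp = 9.0 + 0.2 * rng.normal(size=years.size)    # deg C
human = 0.004 * (years - 2000)                    # e.g. grazing bans raising NDVI
ndvi = (0.15 + 0.0008 * precip - 0.005 * temp + human
        + 0.002 * rng.normal(size=years.size))

# Step 1: regress NDVI on the climate variables (with intercept).
X = np.column_stack([np.ones(years.size), precip, temp])
coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)

# Step 2: the residual is the part climate cannot explain; its trend is
# attributed to human activities.
residual = ndvi - X @ coef
human_trend = np.polyfit(years, residual, 1)[0]   # slope per year
```

Pixels with a significantly positive residual trend would be classed as human-driven reversal, and negative trends as human-driven development; comparing the climate-explained and residual components gives the relative-role percentages quoted in the abstract.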
(This article belongs to the Special Issue Remote Sensing for Land Degradation and Drought Monitoring II)
Figure 1
<p>The study area with monitoring stations.</p>
Figure 2
<p>Flow chart of the desertification information extraction.</p>
Figure 3
<p>Spatial distribution pattern of desertification land in 2020 in Alxa League.</p>
Figure 4
<p>Desertification land transfer matrix in Alxa League from 2000 to 2020.</p>
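A transfer matrix like the one shown here is a cross-tabulation of two classified rasters: entry (i, j) counts the pixels that moved from class i to class j between the two dates. A minimal sketch on synthetic class maps (the five class codes and the simulated reversal are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical classified maps: 0 = non-desertified, 1 = slight, 2 = moderate,
# 3 = severe, 4 = serious desertification.
map_2000 = rng.integers(0, 5, size=(100, 100))
map_2020 = map_2000.copy()

# Simulate reversal: a quarter of the severe/serious pixels improve by one class.
improved = (map_2000 >= 3) & (rng.random((100, 100)) < 0.25)
map_2020[improved] -= 1

# Cross-tabulate: transfer[i, j] = pixels going from class i (2000) to j (2020).
n_class = 5
transfer = np.zeros((n_class, n_class), dtype=int)
np.add.at(transfer, (map_2000.ravel(), map_2020.ravel()), 1)
# Multiplying the counts by the pixel area converts the matrix to km^2.
```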
Figure 5
<p>The migration of barycenters of different types of desertification land in different periods.</p>
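The barycenter migration model behind this figure reduces each desertification class to its area-weighted centroid at each date and tracks the displacement vector between dates. A minimal sketch on a synthetic grid (coordinates and class masks are hypothetical):

```python
import numpy as np

# Hypothetical lon/lat grid covering the study region.
x = np.linspace(100.0, 106.0, 50)     # longitude, degrees E
y = np.linspace(38.0, 42.0, 50)       # latitude, degrees N
X, Y = np.meshgrid(x, y)

def barycenter(mask):
    """Area-weighted centroid; with equal pixel areas this is the mean coordinate."""
    return X[mask].mean(), Y[mask].mean()

# One desertification class occupies the west in 2000 and gains a
# south-eastern lobe by 2010.
mask_2000 = X < 103.0
mask_2010 = (X < 103.0) | ((X < 104.0) & (Y < 39.0))

bx0, by0 = barycenter(mask_2000)
bx1, by1 = barycenter(mask_2010)
dx, dy = bx1 - bx0, by1 - by0   # dx > 0 and dy < 0: barycenter moved south-east
```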
Figure 6
<p>Spatial distribution of the relative roles of climate, human activities, and the combination of these two factors on the exacerbation and mitigation of desertification.</p>
Figure 7
<p>The climatic changes from 2000 to 2020 in Alxa League, Inner Mongolia Autonomous Region, China: (<b>a</b>) annual precipitation; (<b>b</b>) annual mean temperature.</p>
33 pages, 5284 KiB  
Review
Advancing Skyborne Technologies and High-Resolution Satellites for Pasture Monitoring and Improved Management: A Review
by Michael Gbenga Ogungbuyi, Caroline Mohammed, Iffat Ara, Andrew M. Fischer and Matthew Tom Harrison
Remote Sens. 2023, 15(19), 4866; https://doi.org/10.3390/rs15194866 - 8 Oct 2023
Cited by 8 | Viewed by 3063
Abstract
The timely and accurate quantification of grassland biomass is a prerequisite for sustainable grazing management. With advances in artificial intelligence, the launch of new satellites, and perceived gains in the time and cost efficiency of remote quantification methods, there has been growing interest in using satellite imagery and machine learning to quantify pastures at the field scale. Here, we systematically reviewed 214 journal articles published between 1991 and 2021 to determine how vegetation indices derived from satellite imagery impacted the type and quantification of pasture indicators. We reveal that previous studies have been limited by the spatiotemporal resolution of satellite imagery and by prognostic analytics. While the number of studies on pasture classification, degradation, productivity, and management has increased exponentially over the last five years, most vegetation parameters have been derived from satellite imagery using simple linear regression approaches, which, as a corollary, often result in site-specific parameterizations that become spurious when extrapolated to new sites or production systems. Few studies have successfully invoked machine learning retrievals to understand the relationship between image patterns and biophysical variables and to quantify those variables accurately, although many studies have purported to do so. Satellite imagery has contributed to the ability to quantify pasture indicators but has faced barriers to monitoring at the paddock/field scale (20 hectares or less) due to (1) low (coarse-pixel) sensor resolution, (2) infrequent satellite passes, with visibility in many locations often constrained by cloud cover, and (3) the prohibitive cost of accessing fine-resolution imagery. These issues perhaps reflect historical efforts, which were directed at continental or global scales rather than at the field level. Indeed, we found fewer than 20 studies that quantified pasture biomass at pixel resolutions of less than 50 hectares. As such, the use of remote sensing technologies by agricultural practitioners has been relatively low compared with the adoption of physical agronomic interventions (such as ‘no-till’ practices). We contend that (1) considerable opportunity for advancement may lie in fusing optical and radar imagery, or hybrid imagery combining multiple optical sensors, (2) there is greater accessibility of satellite imagery for research, teaching, and education, and (3) developers who understand the value proposition of satellite imagery to end users will collectively fast-track the advancement and uptake of remote sensing applications in agriculture. Full article
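The "site-specific parameterization" criticism above is easy to demonstrate: a linear NDVI-biomass relationship fitted at one site can degrade sharply when transferred to a site with a different sward or soil background. A two-site sketch with synthetic data (all coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical site A: biomass (kg DM/ha) follows one local NDVI relationship.
ndvi_a = rng.uniform(0.3, 0.7, 200)
biomass_a = 4000 * ndvi_a - 600 + 150 * rng.normal(size=200)

# Hypothetical site B: a different sward gives a different relationship.
ndvi_b = rng.uniform(0.3, 0.7, 200)
biomass_b = 2000 * ndvi_b + 1200 + 150 * rng.normal(size=200)

# Fit a simple linear model on site A only, as many reviewed studies do.
slope_a, intercept_a = np.polyfit(ndvi_a, biomass_a, 1)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

rmse_in_site = rmse(slope_a * ndvi_a + intercept_a, biomass_a)   # ~noise level
rmse_new_site = rmse(slope_a * ndvi_b + intercept_a, biomass_b)  # much worse
```

The in-site error sits near the noise floor, while the transferred model carries a large systematic bias, which is the behavior the review describes as spurious extrapolation.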
Graphical abstract
Figure 1
<p>Global distribution of pastures and assessment using remote sensing tools, from “Global assessment of land degradation and improvement 1. Identification by remote sensing” (Bai et al., 2008 [<a href="#B15-remotesensing-15-04866" class="html-bibr">15</a>]). The global grassland classification was added to the original map by the authors.</p>
Figure 2
<p>Number of studies reviewed from each country and across continents. Note: the blue and orange colours represent the ratio of the number of studies from each country (blue) to the total number of studies (orange).</p>
Figure 3
<p>Temporal (annual) pattern of studies reviewed by their topics of coverage. Bars indicate the number of studies published each year.</p>
Figure 4
<p>(<b>a</b>) The two main drivers of pasture variability: climate and anthropogenic factors. (<b>b</b>) Studies using remote sensing to understand how adaptive pasture management could be used to mitigate climate change. Rainfall and temperature variables are regarded as weather and climate data.</p>
Figure 4 Cont.
<p>(<b>a</b>) The two main drivers of pasture variability: climate and anthropogenic factors. (<b>b</b>) Studies using remote sensing to understand how adaptive pasture management could be used to mitigate climate change. Rainfall and temperature variables are regarded as weather and climate data.</p>
Figure 5
<p>(<b>a</b>) Remote sensing instruments used to study pasture conditions. (<b>b</b>) Number of studies according to how instruments were combined for investigation. OO = combination of optical instruments, OR = combination of optical and radar instruments, and UAS = unmanned aerial systems.</p>
Figure 6
<p>Number of studies ranked by the topics covered: OO = combination of optical instruments, OR = combination of optical and radar instruments, and UAS = unmanned aerial systems.</p>
Figure 7
<p>Summary of the scale of focus enabled by satellite sensors. Note: NS = “not specified”, for studies that did not clearly state their scale of coverage. Sixty-three studies indicated study locations without providing details about the scale of focus [<a href="#B76-remotesensing-15-04866" class="html-bibr">76</a>,<a href="#B152-remotesensing-15-04866" class="html-bibr">152</a>,<a href="#B153-remotesensing-15-04866" class="html-bibr">153</a>,<a href="#B154-remotesensing-15-04866" class="html-bibr">154</a>].</p>
Figure 8
<p>(<b>a</b>) High-resolution PlanetScope imagery quantifying pasture biomass variation at the paddock level (image acquired from Planet Lab Inc.; accessed on 6 April 2021); (<b>b</b>) georeferenced version of (<b>a</b>), produced using the map features provided (i.e., Ngahinapouri, Waipa District, Waikato, 3882, New Zealand). Landholders can make management decisions based on pasture availability.</p>
Figure 9
<p>The frequency with which pastures were monitored via satellite imagery passes.</p>
Figure A1
<p>Flowchart describing the systematic literature process.</p>
29 pages, 8828 KiB  
Review
An Overview of Coastline Extraction from Remote Sensing Data
by Xixuan Zhou, Jinyu Wang, Fengjie Zheng, Haoyu Wang and Haitao Yang
Remote Sens. 2023, 15(19), 4865; https://doi.org/10.3390/rs15194865 - 8 Oct 2023
Cited by 18 | Viewed by 6040
Abstract
The coastal zone represents a unique interface between land and sea, and addressing the ecological crisis it faces is of global significance. One of the most fundamental and effective measures is to extract the coastline’s location accurately, dynamically, and on a large scale. Remote sensing technology has been widely employed in coastline extraction owing to its advantages in temporal, spatial, and sensor diversity, and substantial progress has been made as data types and information extraction methods have diversified. This paper discusses research progress on data sources and extraction methods for remote sensing-based coastline extraction. We summarize the suitability of data and of several extraction algorithms for specific coastline types, including rocky, sandy, muddy, biological, and artificial coastlines. We also discuss the significant challenges and prospects of coastline dataset construction, remotely sensed data selection, and the applicability of extraction methods. In particular, we propose the idea of extracting coastlines with a semantic segmentation method based on a coastline scene knowledge graph (CSKG). This review serves as a comprehensive reference for future development and research pertaining to coastal exploitation and management. Full article
(This article belongs to the Section Ocean Remote Sensing)
Figure 1
<p>Number of publications in the last decade based on the WOS research publication database and obtained by searching keywords: (<b>a</b>) “remote sensing” and “coastline”, (<b>b</b>) “remote sensing” and “coastline extraction” or “shoreline extraction”, and (<b>c</b>) “machine learning” and “remote sensing” and “coastline extraction” or “shoreline extraction”.</p>
Figure 2
<p>A schematic of a rocky coastline. The rocky shoreline is located in Jekyll Island State Park, USA. Exposed revetment rocks are shown in beach segments (A) and (B). In (B), part of the revetment rock shown in (A) has been buried by sand; only approximately 3.5 km of rocks from (b) to (a) remain exposed [<a href="#B53-remotesensing-15-04865" class="html-bibr">53</a>].</p>
Figure 3
<p>Sentinel-2 satellite image of the Shamada research site. The black line shows the sandy coastline [<a href="#B57-remotesensing-15-04865" class="html-bibr">57</a>].</p>
Figure 4
<p>Photo of a silty coast [<a href="#B56-remotesensing-15-04865" class="html-bibr">56</a>].</p>
Figure 5
<p>Photo of a biological coast [<a href="#B56-remotesensing-15-04865" class="html-bibr">56</a>].</p>
Figure 6
<p>Sentinel-2 satellite image of an artificial coast.</p>
Figure 7
<p>Methods previously applied to coastline extraction (<a href="#remotesensing-15-04865-t002" class="html-table">Table 2</a>).</p>
Figure 8
<p>Comparison of different extraction methods: (<b>a</b>) MNDWI, (<b>b</b>) OTSU after MNDWI, (<b>c</b>) morphological processing after OTSU, (<b>d</b>) Canny edge detection combined with OTSU and morphological processing, and (<b>e</b>) Canny edge detection without OTSU [<a href="#B93-remotesensing-15-04865" class="html-bibr">93</a>].</p>
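The MNDWI-plus-OTSU steps compared in this figure can be sketched directly: compute the water index from the green and SWIR bands, pick a threshold with Otsu's method, and read the coastline off the water/land boundary. A minimal sketch on a synthetic 60 × 60 scene (band values are hypothetical; the morphological and Canny refinement steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic green and SWIR reflectance: water on the left, land on the right.
cols = np.arange(60)[None, :]
green = np.where(cols < 30, 0.10, 0.06) + 0.005 * rng.normal(size=(60, 60))
swir = np.where(cols < 30, 0.02, 0.15) + 0.005 * rng.normal(size=(60, 60))

# Modified Normalized Difference Water Index: high over water, low over land.
mndwi = (green - swir) / (green + swir)

def otsu_threshold(img, bins=256):
    """Otsu's method: the histogram threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    omega = np.cumsum(w)              # cumulative class-0 probability
    mu = np.cumsum(w * centers)       # cumulative class-0 mean (unnormalized)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega))
    return centers[np.nanargmax(sigma_b)]

water = mndwi > otsu_threshold(mndwi)
# The coastline is the water/land boundary: here, the first land column per row.
edge_cols = np.argmax(~water, axis=1)
```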
Figure 9
<p>The process of evolving the contour toward the edge line in the coastline extraction task [<a href="#B99-remotesensing-15-04865" class="html-bibr">99</a>]. (<b>a</b>) Initial contour with initial values; (<b>b</b>) contour with final values.</p>
Figure 10
<p>Workflow for combining remote sensing data and machine learning methods to extract coastlines.</p>
Figure 11
<p>Coastline extraction process based on a CNN model [<a href="#B130-remotesensing-15-04865" class="html-bibr">130</a>].</p>
Figure 12
<p>The structure of U-Net for extracting coastlines. The encoder and decoder compute a pyramid of feature maps [<a href="#B133-remotesensing-15-04865" class="html-bibr">133</a>].</p>
Figure 13
<p>The construction of a remote sensing knowledge graph (RSKG) [<a href="#B25-remotesensing-15-04865" class="html-bibr">25</a>].</p>
Figure 14
<p>The construction of a coastline scene knowledge graph (CSKG).</p>