Remote Sens., Volume 16, Issue 1 (January-1 2024) – 210 articles

Cover Story (view full-size image): Accurate information on land use and land cover (LULC) is crucial for effective regional land and forest management. This study addresses the challenge of obtaining reliable LULC information for the intricate Wunbaik Mangrove Area in Myanmar by employing a U-Net deep learning model with multisource satellite imagery. The models are trained and assessed using labeled images created from ground truth and evaluated for each class. The study contributes to the optimal utilization of multisource remote sensing data and advanced classification methods for accurate LULC mapping of mangrove ecosystems. The findings on LULC information have practical implications for implementing conservation measures, thereby contributing to the sustainable management of this unique mangrove forest area. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
22 pages, 11428 KiB  
Article
Evaluation of GSMaP Version 8 Precipitation Products on an Hourly Timescale over Mainland China
by Xiaoyu Lv, Hao Guo, Yunfei Tian, Xiangchen Meng, Anming Bao and Philippe De Maeyer
Remote Sens. 2024, 16(1), 210; https://doi.org/10.3390/rs16010210 - 4 Jan 2024
Cited by 4 | Viewed by 3046
Abstract
A thorough evaluation of the recently released Global Satellite Mapping of Precipitation (GSMaP) is critical for both end-users and algorithm developers. In this study, six products from three versions of GSMaP version 8, including real time (NOW-R and NOW-C), near real time (NRT-R and NRT-C), and post-real time (MVK-R and MVK-C), are systematically and quantitatively evaluated based on hourly observations from 2167 stations in mainland China. For each version, the products with and without gauge correction are included to assess the effect of gauge correction. Error quantification is carried out on an hourly timescale. Three common statistical indices (i.e., correlation coefficient (CC), relative bias (RB), and root mean square error (RMSE)) and three event detection capability indices (i.e., probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI)) were adopted to analyze the inversion errors in precipitation amount and precipitation event frequency across the various products. Additionally, in this study, we examine the dependence of GSMaP errors on rainfall intensity and elevation. The main results are as follows: (1) MVK-C exhibits the best ability to retrieve rainfall on the hourly timescale, with higher CC values (0.31 in XJ to 0.47 in SC), smaller RMSE values (0.14 mm/h in XJ to 0.99 mm/h in SC), and lower RB values (−4.78% in XJ to 16.03% in NC). (2) Among these three versions, the gauge correction procedure plays a crucial role in reducing errors, especially in the post-real-time version. After being corrected, MVK-C demonstrates an obvious CC value improvement (>0.3 on the hourly timescale) in various sub-regions, increasing the percentage of sites with CC values above 0.5 from 0.03% (MVK-R) to 28.47% (MVK-C). (3) GSMaP products generally exhibit error dependencies on precipitation intensity and elevation, particularly in areas with drastic elevation changes (such as 1200–1500 m and 3000–3300 m), where the accuracy of satellite precipitation estimates is significantly affected. (4) CC values decrease with increasing rainfall intensity, whereas RB and RMSE values increase with increasing rainfall intensity. The results of this study may be helpful for algorithm developers and end-users and provide a scientific reference for different hydrological applications and disaster risk reduction. Full article
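For readers who want to reproduce this kind of point-to-pixel evaluation, the following is a minimal sketch of the six statistics named in the abstract (CC, RB, RMSE, POD, FAR, CSI) for one pair of matched hourly series. The synthetic arrays and the 0.1 mm/h rain/no-rain threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def continuous_stats(sat, gauge):
    """CC, RB (%), and RMSE between matched satellite and gauge hourly precipitation."""
    cc = np.corrcoef(sat, gauge)[0, 1]
    rb = 100.0 * (sat - gauge).sum() / gauge.sum()
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    return cc, rb, rmse

def categorical_stats(sat, gauge, threshold=0.1):
    """POD, FAR, and CSI for rain/no-rain detection at a given threshold (mm/h)."""
    sat_rain, gauge_rain = sat >= threshold, gauge >= threshold
    hits = np.sum(sat_rain & gauge_rain)           # both detect rain
    misses = np.sum(~sat_rain & gauge_rain)        # gauge rain, satellite dry
    false_alarms = np.sum(sat_rain & ~gauge_rain)  # satellite rain, gauge dry
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Illustrative synthetic example (not GSMaP data)
rng = np.random.default_rng(0)
gauge = rng.gamma(0.3, 1.0, size=1000)                      # hourly gauge rainfall, mm/h
sat = np.clip(gauge + rng.normal(0, 0.5, 1000), 0, None)    # noisy "satellite" estimate
print(continuous_stats(sat, gauge))
print(categorical_stats(sat, gauge))
```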
Show Figures

Figure 1. (a) Digital elevation model (DEM) and (b) locations of meteorological stations in mainland China.
Figure 2. Boxplots of precipitation amount statistics for the CC, RB, and RMSE in seven sub-regions on an hourly timescale: NC (northern China), NE (northeastern China), NW (northwestern China), SC (southeastern China), TP (Qinghai–Tibetan Plateau), YP (Yungui Plateau), and XJ (Xinjiang).
Figure 3. Spatial distribution of the CC, RB, and RMSE between hourly precipitation data from 2167 stations and SPEs in mainland China.
Figure 4. Taylor diagram with the CC, STD, and RMSE of hourly mean precipitation between the SPEs and gauge observations for each sub-region.
Figure 5. Scatter density plots of the rainfall rate in mainland China for six types of SPE: MVK-R, MVK-C, NRT-R, NRT-C, NOW-R, and NOW-C.
Figure 6. Barplots of the categorical indicators POD, FAR, and CSI for seven sub-regions on an hourly timescale.
Figure 7. Error dependence of the CC, RB, and RMSE on elevation.
Figure 8. Error dependence of the CC, RB, and RMSE on precipitation intensity.
Figure 9. Taylor diagram showing the CC, STD, and RMSE of daily mean precipitation between the satellite-based precipitation products and the reference data for each sub-region.
Figure 10. Barplots of the categorical indicators POD, FAR, and CSI for seven sub-regions on the daily timescale.
Figure A1. Probability density functions (PDF) of the CC, RB, and RMSE.
32 pages, 12596 KiB  
Article
Multi-Timescale Characteristics of Southwestern Australia Nearshore Surface Current and Its Response to ENSO Revealed by High-Frequency Radar
by Hongfei Gu and Yadan Mao
Remote Sens. 2024, 16(1), 209; https://doi.org/10.3390/rs16010209 - 4 Jan 2024
Cited by 1 | Viewed by 1897
Abstract
The surface currents in coastal areas are closely related to the ecological environment and human activities, and are influenced by both local and remote factors of different timescales, resulting in complex genesis and multi-timescale characteristics. In this research, 9-year-long, hourly high-frequency radar (HFR) surface current observations are utilized together with satellite remote sensing reanalysis products and mooring data, and based on the Empirical Orthogonal Function (EOF) and correlation analysis, we revealed the multi-timescale characteristics of the surface currents in Fremantle Sea (32°S), Southwestern Australia, and explored the corresponding driving factors as well as the impact of El Niño-Southern Oscillation (ENSO) on the nearshore currents. Results show that the currents on the slope are dominated by the southward Leeuwin Current (LC), and the currents within the shelf are dominated by winds, which are subject to obvious diurnal and seasonal variations. The strong bathymetry variation there, from a wide shelf in the north to a narrow shelf in this study region, also plays an important role, resulting in the frequent occurrence of nearshore eddies. In addition, the near-zonal winds south of 30°S in winter contribute to the interannual variability of the Leeuwin Current at Fremantle, especially in 2011, when the onshore shelf circulation is particularly strong because of the climatic factors, together with the wind-driven offshore circulation, which results in significant and long-lasting eddies. The southward Leeuwin Current along Southwestern Australia shows a strong response to interannual climatic variability. During La Niña years, the equatorial thermal anomalies generate the westward anomalies in winds and equatorial currents, which in turn strengthen the Leeuwin Current and trigger the cross-shelf current as well as downwelling within the shelf at Fremantle, whereas during El Niño years, the climate anomalies and the response of coastal currents are opposite. This paper provides insights into the multi-timescale nature of coastal surface currents and the relative importance of different driving mechanisms. It also demonstrates the potential of HFR to reveal the response of nearshore currents to climate anomalies when combined with other multivariate data. Meanwhile, the methodology adopted in this research is applicable to other coastal regions with long-term available HFR observations. Full article
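The EOF analysis mentioned in the abstract can be sketched as an SVD of a (time × grid) data matrix; the function and the synthetic field below are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def eof_modes(field, n_modes=3, demean=True):
    """EOF analysis of a data matrix shaped (time, space) via SVD.

    Returns the leading spatial modes (EOFs), temporal coefficients (PCs),
    and the fraction of total variance explained by each mode.
    """
    data = field - field.mean(axis=0) if demean else field
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    pcs = u[:, :n_modes] * s[:n_modes]   # temporal coefficients
    eofs = vt[:n_modes]                  # spatial patterns
    return eofs, pcs, variance_fraction[:n_modes]

# Illustrative example: 500 hourly current maps on a 20 x 30 grid (not real HFR data)
rng = np.random.default_rng(1)
t = np.arange(500)
pattern = rng.normal(size=20 * 30)
field = np.outer(np.sin(2 * np.pi * t / 24), pattern) + rng.normal(0, 0.1, (500, 600))
eofs, pcs, var_frac = eof_modes(field)
print(var_frac)
```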
Show Figures

Figure 1. Western Australia, the ROT radar site in the Fremantle Sea, and the effective sampling proportion of the radar; WATR10, WATR20, and WACA20 are the three mooring sites.
Figure 2. Frequency spectra of the hourly HFR-derived current component time series.
Figure 3. Diurnal, seasonal, and interannual statistical values of the ROT radar-observed surface currents.
Figure 4. Mean flow vectors in three spatial units at three timescales.
Figure 5. Histogram statistics of the surface current direction at three timescales in three spatial units.
Figure 6. Primary EOF modes of the HFR-derived current at three timescales, showing the spatial modes (EOF) and corresponding temporal coefficients (PC).
Figure 7. Spatially averaged U-component time series of the HFR-derived current and winds in three spatial units at three timescales and their correlation.
Figure 8. Spatially averaged V-component time series of the HFR-derived current and winds in three spatial units at three timescales and their correlation.
Figure 9. Typical wind, satellite-observed current, and SSH EOF modes correlated with the primary HFR current EOF modes at three timescales.
Figure 10. Diurnal variation of the spatially averaged HFR-derived current in three spatial units and four seasons.
Figure 11. Diurnal variation of winds in four seasons (March 2010 to February 2019) in Southwestern Australia.
Figure 12. Correlation of the SOI with FSL, satellite-observed current, and SSH in Southwestern Australia.
Figure 13. Composite satellite-observed surface current anomalies before and after the peak months of El Niño and La Niña events in Southwestern Australia.
Figure 14. Mean states and anomalies of winter and summer HFR-derived currents during two typical ENSO events.
Figure 15. Histogram statistics of the winter and summer HFR-derived current components in the Fremantle inner and outer shelf during two typical ENSO events.
Figure 16. Differences in sea interior response during El Niño and La Niña as reflected by the moorings and HFR together.
Figure 17. Top–bottom temperature difference (WATR20) and the U, V, and W current components (WACA20) observed by moorings in winter during two typical ENSO events.
16 pages, 2672 KiB  
Technical Note
Ozone Trend Analysis in Natal (5.4°S, 35.4°W, Brazil) Using Multi-Linear Regression and Empirical Decomposition Methods over 22 Years of Observations
by Hassan Bencherif, Damaris Kirsch Pinheiro, Olivier Delage, Tristan Millet, Lucas Vaz Peres, Nelson Bègue, Gabriela Bittencourt, Maria Paulete Pereira Martins, Francisco Raimundo da Silva, Luiz Angelo Steffenel, Nkanyiso Mbatha and Vagner Anabor
Remote Sens. 2024, 16(1), 208; https://doi.org/10.3390/rs16010208 - 4 Jan 2024
Cited by 2 | Viewed by 2124
Abstract
Ozone plays an important role in the Earth’s atmosphere. It is mainly formed in the tropical stratosphere and is transported by the Brewer–Dobson Circulation to higher latitudes. In the stratosphere, ozone can filter the incoming solar ultraviolet radiation, thus protecting life at the surface. Although tropospheric ozone accounts for only ~10% of the total ozone column, it is a powerful greenhouse gas and pollutant, harmful to the health of the environment and living beings. Several studies have highlighted biomass burning as a major contributor to the tropospheric ozone budget. Our study focuses on the Natal site (5.40°S, 35.40°W, Brazil), one of the oldest ozone-observing stations in Brazil, which is expected to be influenced by fire plumes from Africa and Brazil. Many studies that examined ozone trends used the total atmospheric columns of ozone, but it is important to assess ozone separately in the troposphere and the stratosphere. In this study, we have used radiosonde ozone profiles and daily total column ozone (TCO) measurements to evaluate the variability and changes of both tropospheric and stratospheric ozone separately. The dataset in this study comprises daily total columns of colocalized ozone and weekly ozone profiles collected between 1998 and 2019. The tropospheric columns were estimated by integrating ozone profiles measured by ozone sondes up to the tropopause height. The amount of ozone in the stratosphere was then deduced by subtracting the tropospheric ozone amount from the total amount of ozone measured by the Dobson spectrometer. It was assumed that the amount of ozone in the mesosphere is negligible. This produced three distinct time series of ozone: tropospheric and stratospheric columns as well as total columns. The present study aims to apply a new decomposition method named Empirical Adaptive Wavelet Decomposition (EAWD), which identifies the different modes of variability present in the analyzed signal by summing up the most significant Intrinsic Mode Functions (IMF). The Fourier spectrum of the original signal is broken down into spectral bands that frame each IMF obtained by Empirical Mode Decomposition (EMD); the Empirical Wavelet Transform (EWT) is then applied to each interval. Unlike methods such as EMD and multi-linear regression (MLR), the EAWD technique provides better frequency resolution, thereby overcoming the phenomenon of mode-mixing, and can detect possible breakpoints in the trend mode. The obtained ozone datasets were analyzed using three methods: MLR, EMD, and EAWD. The EAWD algorithm exhibited the advantage of retrieving ~90% to 95% of the ozone variability and detecting possible breakpoints in its trend component. Overall, the MLR and EAWD methods showed almost similar trends: a decrease in stratospheric ozone (−1.3 ± 0.8%) and an increase in tropospheric ozone (+4.9 ± 1.3%). This study shows the relevance of combining data to separately analyze tropospheric and stratospheric ozone variability and trends. It highlights the advantage of the EAWD algorithm in detecting modes of variability in a geophysical signal without prior knowledge of the underlying forcings. Full article
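As an illustration of the multi-linear regression (MLR) step against which EAWD is compared, the sketch below fits a linear trend plus annual and semi-annual harmonics (and an optional QBO proxy) to a monthly ozone series. The regressor set and the synthetic series are assumptions for demonstration, not the paper's exact model.

```python
import numpy as np

def mlr_trend(months, ozone, qbo_proxy=None):
    """Least-squares fit of a linear trend plus annual/semi-annual harmonics.

    months    : time in months since the start of the record
    ozone     : monthly ozone column (e.g., DU)
    qbo_proxy : optional QBO index (e.g., 20 hPa Singapore zonal wind)
    Returns the fitted coefficients and the trend in units per decade.
    """
    t = np.asarray(months, dtype=float)
    cols = [np.ones_like(t), t,
            np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12),
            np.cos(4 * np.pi * t / 12), np.sin(4 * np.pi * t / 12)]
    if qbo_proxy is not None:
        cols.append(np.asarray(qbo_proxy, dtype=float))
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, ozone, rcond=None)
    trend_per_decade = coeffs[1] * 120.0   # slope is per month
    return coeffs, trend_per_decade

# Illustrative synthetic series (not the Natal record): 22 years of monthly data
t = np.arange(264)
ozone = 265 + 0.02 * t + 6 * np.cos(2 * np.pi * t / 12) \
        + np.random.default_rng(2).normal(0, 2, t.size)
coeffs, trend = mlr_trend(t, ozone)
print(f"trend ~ {trend:.2f} DU/decade")
```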
Show Figures

Graphical abstract
Figure 1. Geographical position of the Natal study site (5.4°S, 35.4°W) in the north of Brazil, operated by INPE (National Institute for Space Research); the Cachoeira (22.68°S, 45.00°W), Irene (25.90°S, 28.22°E), and Reunion (20.89°S, 55.53°E) sites are also indicated.
Figure 2. Monthly time series of total, stratospheric, and tropospheric ozone columns at Natal (5.40°S, 35.40°W), Rio Grande do Norte state, Brazil, obtained by combining and merging ground-based measurements (balloon-sonde profiles and Dobson total columns) with satellite observations from TOMS (until 2005), OMI, and OMI–MLS.
Figure 3. IMF1–IMF5 detected by the EMD and EAWD algorithms applied to the TCO at Natal over the 1998–2019 observation period, together with their spectral densities.
Figure 4. Zonal wind at the 20 hPa pressure level over Singapore (1°N, 104°E), corresponding to the QBO index, superimposed on the sum of IMF3 and IMF4 (standing for QBO1 and QBO2, with periods of 26 and 33 months) over the 1998–2019 study period.
Figure 5. Ozone time series and trend curves derived from the MLR and EAWD methods for TCO, Strat–CO, and Trop–CO from 1998 to 2019.
27 pages, 19128 KiB  
Article
Aerosol Optical Properties Retrieved by Polarization Raman Lidar: Methodology and Strategy of a Quality-Assurance Tool
by Song Mao, Zhenping Yin, Longlong Wang, Yubin Wei, Zhichao Bu, Yubao Chen, Yaru Dai, Detlef Müller and Xuan Wang
Remote Sens. 2024, 16(1), 207; https://doi.org/10.3390/rs16010207 - 4 Jan 2024
Cited by 3 | Viewed by 1881
Abstract
Aerosol optical properties retrieved using polarization Raman lidar observations play an increasingly vital role in meteorology and environmental protection. The quality of the data products directly affects the impact of relevant scientific applications. However, the quality of aerosol optical properties retrieved from polarization Raman lidar signals is difficult to assess. Various factors, such as hardware system performance, retrieval algorithm, and meteorological conditions at the observation site, influence data quality. In this study, we propose a method that allows for assessing the reliability of aerosol optical properties derived from polarization Raman lidar observations. We analyze the factors that affect the reliability of retrieved aerosol optical properties. We use scoring methods combined with a weight-assignment scheme to evaluate the quality of the retrieved aerosol optical properties. The scores and weights of each factor are arranged based on our analysis of a simulation study and the characteristics of each factor. We developed an automatic retrieval algorithm that allows for deriving homogeneous aerosol optical data sets. We also assess with this method the quality of retrieved aerosol optical properties obtained with different polarization Raman lidars under different measurement scenarios. Our results show that the proposed quality assurance method can distinguish the reliability of the retrieved aerosol optical properties. Full article
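The weight-assignment scheme described above can be illustrated with a simple weighted average of per-factor scores; the factor names and weights below are hypothetical placeholders, not the scores and weights derived in the paper.

```python
def quality_score(scores, weights):
    """Combine per-factor scores (each in [0, 1]) into one weighted QA score.

    scores  : dict of factor name -> score in [0, 1]
    weights : dict of factor name -> relative weight (need not sum to 1)
    """
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Illustrative factors and weights (hypothetical, not the paper's assignment)
weights = {"rayleigh_fit": 3, "signal_gluing": 2, "snr": 3,
           "overlap": 2, "depolarization_calibration": 2}
scores = {"rayleigh_fit": 0.9, "signal_gluing": 0.8, "snr": 0.7,
          "overlap": 0.95, "depolarization_calibration": 0.85}
print(f"overall QA score: {quality_score(scores, weights):.2f}")
```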
Show Figures

Figure 1. Flowchart of the automatic retrieval algorithm for polarization Raman lidar.
Figure 2. Flowchart of the automatic gluing algorithm.
Figure 3. Example of automatic gluing for the parallel-polarization, Raman, and cross-polarization channels before and after gluing.
Figure 4. Flowchart of the Rayleigh-fit procedure.
Figure 5. Examples of the Rayleigh fit for three scenarios and the differences between the RCS of the lidar signals and the atmospheric molecular Rayleigh-fit signals.
Figure 6. Simulated lidar profiles of the atmospheric scene described in the main text: aerosol extinction coefficients, aerosol backscatter coefficients, lidar ratios, and linear particle depolarization ratios (PDR) at 355 nm and 532 nm.
Figure 7. Range-corrected signal (RCS) of the simulated case at 355, 386, 532, and 607 nm.
Figure 8. Relative errors of the aerosol optical properties caused by trigger-delay deviation from the true value.
Figure 9. Relative errors of the aerosol optical properties caused by nonlinearity of the lidar system.
Figure 10. Relative errors of the linear VDR and linear PDR caused by polarization crosstalk, with and without depolarization-ratio calibration.
Figure 11. Relative errors of the aerosol optical properties caused by Raman crosstalk.
Figure 12. Diagram of the area-integrated ratio method used for defining the overlap score.
Figure 13. Relative errors of the aerosol optical properties caused by temperature deviations.
Figure 14. Relative errors of the aerosol optical properties caused by pressure deviations.
Figure 15. Curtain plot of the range-corrected signal acquired between 20:00 and 08:50 LT on 6–7 February 2020, with RCS and SNR profiles at 355 nm and 386 nm for 06:00–06:30 LT on 7 February 2020.
Figure 16. Aerosol optical properties between 06:00 and 06:30 LT on 7 February 2020.
Figure 17. Curtain plot of the range-corrected signals acquired between 18:00 and 06:55 LT on 8–9 December 2019, with profiles for 18:30–19:00 LT on 8 December 2019.
Figure 18. Aerosol optical properties between 18:30 and 19:00 LT on 8 December 2019.
Figure 19. Curtain plot of the range-corrected signals on 5 February 2021, with profiles for 00:30–01:00 LT.
Figure 20. Aerosol optical properties at 532 nm between 00:30 and 01:00 LT on 5 February 2021.
Figure 21. Curtain plot of the range-corrected signals acquired between 00:00 and 21:40 LT on 16 March 2021, with profiles for 19:30–20:00 LT.
Figure 22. Aerosol optical properties at 532 nm between 19:30 and 20:00 LT on 16 March 2021.
2 pages, 20888 KiB  
Correction
Correction: Yang et al. Spatial Diffusion Waves of Human Activities: Evidence from Harmonized Nighttime Light Data during 1992–2018 in 234 Cities of China. Remote Sens. 2023, 15, 1426
by Jianxin Yang, Man Yuan, Shengbing Yang, Danxia Zhang, Yingge Wang, Daiyi Song, Yunze Dai, Yan Gao and Jian Gong
Remote Sens. 2024, 16(1), 206; https://doi.org/10.3390/rs16010206 - 4 Jan 2024
Viewed by 1047
Abstract
In the original publication [...] Full article
Show Figures

Figure 1. The 234 sample cities with a total population larger than 2 million in 2019 in China.
14 pages, 4438 KiB  
Technical Note
Typhoon-Induced Extreme Sea Surface Temperature Drops in the Western North Pacific and the Impact of Extra Cooling Due to Precipitation
by Jia-Yi Lin, Hua Ho, Zhe-Wen Zheng, Yung-Cheng Tseng and Da-Guang Lu
Remote Sens. 2024, 16(1), 205; https://doi.org/10.3390/rs16010205 - 4 Jan 2024
Viewed by 2163
Abstract
Sea surface temperature (SST) responses have been perceived as crucial to consequential tropical cyclone (TC) intensity development. In addition to regular cooling responses, a few TCs could cause extreme SST drops (ESSTDs) (e.g., SST drops more than 6 °C) during their passage. Given the extreme temperature differences and the consequentially marked air–sea flux modulations, ESSTDs are intuitively supposed to play a serious role in modifying TC intensities. Nevertheless, the relationship between ESSTDs and consequential storm intensity changes remains unclear. In this study, satellite-observed microwave SST drops and the International Best Track Archive for Climate Stewardship TC data from 2001 to 2021 were used to elucidate the relationship between ESSTDs and the consequential TC intensity changes in the Western North Pacific typhoon season (July–October). Subsequently, the distributed characteristics of ESSTDs were systematically examined based on statistical analyses. Among them, Typhoon Kilo (2015) triggered an unexpected ESSTD behind its passage, according to existing theories. Numerical experiments based on the Regional Ocean Modeling System were carried out to explore the possible mechanisms that resulted in the ESSTD due to Kilo. The results indicate that heavy rainfall leads to additional SST cooling through the enhanced sensible heat flux leaving the surface layer in addition to the cooling from momentum-driven vertical mixing. This process enhanced the sensible heat flux leaving the sea surface since the temperature of the raindrops could be much colder than the SST in the tropical ocean, specifically under heavy rainfall and relatively less momentum entering the upper ocean during Kilo. Full article
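The extra cooling term can be illustrated with the standard rain sensible-heat-flux estimate, Q_p = ρ_w c_w R (SST − T_rain), where R is the rain rate and T_rain the raindrop temperature. The sketch below uses this textbook form with illustrative numbers; it is not the exact correction applied in the paper's ROMS experiments.

```python
RHO_FRESH = 1000.0   # density of fresh (rain) water, kg m-3
CP_WATER = 4186.0    # specific heat of liquid water, J kg-1 K-1

def rain_sensible_heat_flux(rain_rate_mm_hr, sst_c, rain_temp_c):
    """Sensible heat flux (W m-2) removed from the ocean by rain.

    Q_p = rho_w * c_w * R * (SST - T_rain), with R converted from mm/h to m/s.
    A positive value means heat leaving the sea surface.
    """
    rain_rate_ms = rain_rate_mm_hr / 1000.0 / 3600.0
    return RHO_FRESH * CP_WATER * rain_rate_ms * (sst_c - rain_temp_c)

# Illustrative numbers: 30 mm/h of rain 5 degC colder than a 29 degC sea surface
print(f"{rain_sensible_heat_flux(30.0, 29.0, 24.0):.0f} W m-2")  # roughly 174 W m-2
```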
Show Figures

Figure 1. Comparison of satellite-based OISST and in situ SSTs measured by surface drifters passing the study area from 2001 to 2021, for all periods and for TC passages, together with the corresponding biases (satellite–drifter).
Figure 2. Number distribution of TICs of individual strength corresponding to all TC passages in the WNP from 2001 to 2021; the boxed area outlines the TICs that belong to ESSTDs.
Figure 3. Comparison of TICs (unit: °C) and TCI changes in delta wind speed (unit: m s−1).
Figure 4. Ratios of all TICs (including ESSTDs) corresponding to the open ocean and shelf regions.
Figure 5. Distribution of all TICs in the shelf region and open ocean, with ESSTDs marked separately; ESSTDs that occurred in the open ocean were triggered by the typhoons listed in Table 1.
Figure 6. Sea surface cooling responses (relative to 4 September) during the passage of Kilo (2015) from satellite-based OISSTs and the ROMS simulation.
Figure 7. Daily accumulated precipitation in different composites: the September climatology for the domain (20–30°N, 160–170°E), the average composite of all TCs passing through the same domain from 2001 to 2020, and the average precipitation along the track of Kilo.
Figure 8. Model-simulated sea surface cooling responses (relative to 4 September 2015) during the passage of Kilo (2015) with the corrected term due to Qp cooling.
22 pages, 7088 KiB  
Article
Transboundary Central African Protected Area Complexes Demonstrate Varied Effectiveness in Reducing Predicted Risk of Deforestation Attributed to Small-Scale Agriculture
by Katie P. Bernhard, Aurélie C. Shapiro, Rémi d’Annunzio and Joël Masimo Kabuanga
Remote Sens. 2024, 16(1), 204; https://doi.org/10.3390/rs16010204 - 4 Jan 2024
Cited by 1 | Viewed by 3083
Abstract
The forests of Central Africa constitute the continent’s largest continuous tract of forest, maintained in part by over 200 protected areas across six countries with varying levels of restriction and enforcement. Despite protection, these Central African forests are subject to a multitude of overlapping proximate and underlying drivers of deforestation and degradation, such as conversion to small-scale agriculture. This pilot study explored whether transboundary protected area complexes featuring mixed resource-use restriction categories are effective in reducing the predicted disturbance risk to intact forests attributed to small-scale agriculture. At two transboundary protected area complex sites in Central Africa, we used Google Earth Engine and a suite of earth observation (EO) data, including a dataset derived using a replicable, open-source methodology stemming from a regional collaboration, to predict the increased risk of deforestation and degradation of intact forests caused by small-scale agriculture. For each complex, we then statistically compared the predicted increased risk between protected and unprotected forests for a stratified random sample of 2 km sites (n = 4000). We found varied effectiveness of protected areas for reducing the predicted risk of deforestation and degradation to intact forests attributed to agriculture by both the site and category of protected areas within the complex. Our early results have implications for sustainable agriculture development, forest conservation, and protected areas management and provide a direction for future research into spatial planning. Spatial planning could optimize the configuration of protected area types within transboundary complexes to achieve both forest conservation and sustainable agricultural production outcomes. Full article
(This article belongs to the Special Issue Recent Progress in Earth Observation Data for Sustainable Development)
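The protected-versus-unprotected comparison of predicted risk at sampled sites can be sketched with a two-sample test; the Mann–Whitney U test and the synthetic values below are illustrative choices, not necessarily the statistical test or data used by the authors.

```python
import numpy as np
from scipy import stats

def compare_risk(protected, unprotected):
    """Compare predicted risk between protected and unprotected sample sites.

    Uses a two-sided Mann-Whitney U test (a reasonable choice for skewed
    risk distributions); the paper's exact test may differ.
    """
    u_stat, p_value = stats.mannwhitneyu(protected, unprotected, alternative="two-sided")
    return {
        "protected_mean": float(np.mean(protected)),
        "unprotected_mean": float(np.mean(unprotected)),
        "U": float(u_stat),
        "p_value": float(p_value),
    }

# Illustrative synthetic risk values in [0, 1] for 2000 sites per stratum
rng = np.random.default_rng(3)
protected = rng.beta(2, 8, 2000)     # lower predicted risk on average
unprotected = rng.beta(3, 6, 2000)
print(compare_risk(protected, unprotected))
```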
Show Figures

Figure 1. Land cover in the six countries of the Central African subregion included in this study (Cameroon, Central African Republic, Democratic Republic of Congo, Republic of Congo, Gabon, Equatorial Guinea). Data: WDPA, 2023 [22]; CAFI, 2022 [23].
Figure 2. (a) Conflict fatalities and (b) heatmap of overall conflict incidents in the Central Africa subregion, aggregated for 2010–2022. Data: ACLED, accessed 2022 [63].
Figure 3. Workflow demonstrating the major procedures undertaken in this study [2].
Figure 4. (a) Sangha Trinational Protected Areas Complex (STPAC) at the border of CMR, COG, and CAR, including three national parks (~705,000 ha), a forest reserve (~58,000 ha), and a special reserve (~339,000 ha). (b) Bili-Uéré Protected Areas Complex (BUPAC) at the border of DRC and CAR, including the Bili-Uéré Hunting Reserve (~3.2 million ha) and Bomu Wildlife Reserve (~1.1 million ha). Data: WDPA, 2023 [22]; Pélissier et al., 2018 [73].
Figure 5. Small-scale agriculture points from the drivers-of-deforestation point layer [2] and the derived distance raster layer for small-scale agriculture over the BUPAC extent.
Figure 6. Variable importance in the random forest model used to predict the increased risk to intact forest resulting from randomly generated small-scale agriculture points.
Figure 7. Risk-to-intact-forest output layer, displaying the predicted increased risk resulting from randomly generated small-scale agriculture points, with close-up views of unprotected and protected points.
Figure 8. (a) Boxplots comparing mean risk values inside and outside the protected area complex boundaries; (b) comparison of distributions via kernel density estimation.
Figure 9. Predicted increased deforestation by protected area category and country in the transboundary system.
Figure 10. K-means clustering of the predicted increased risk to intact forests from small-scale agriculture.
30 pages, 12524 KiB  
Article
A Novel ICESat-2 Signal Photon Extraction Method Based on Convolutional Neural Network
by Wenjun Qin, Yan Song, Yarong Zou, Haitian Zhu and Haiyan Guan
Remote Sens. 2024, 16(1), 203; https://doi.org/10.3390/rs16010203 - 4 Jan 2024
Cited by 7 | Viewed by 2312
Abstract
When it comes to the application of the photon data gathered by the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), accurately removing noise is crucial. In particular, conventional denoising algorithms based on local density are susceptible to missing some signal photons when there is uneven signal density distribution, as well as being susceptible to misclassifying noise photons near the signal photons; the application of deep learning remains untapped in this domain as well. To solve these problems, a method for extracting signal photons based on a GoogLeNet model fused with a Convolutional Block Attention Module (CBAM) is proposed. The network model can make good use of the distribution information of each photon’s neighborhood, and simultaneously extract signal photons with different photon densities to avoid misclassification of noise photons. The CBAM enhances the network to focus more on learning the crucial features and improves its discriminative ability. In the experiments, simulation photon data in different signal-to-noise ratios (SNR) levels are utilized to demonstrate the superiority and accuracy of the proposed method. The results from signal extraction using the proposed method in four experimental areas outperform the conventional methods, with overall accuracy exceeding 98%. In the real validation experiments, reference data from four experimental areas are collected, and the elevation of signal photons extracted by the proposed method is proven to be consistent with the reference elevation, with R2 exceeding 0.87. Both simulation and real validation experiments demonstrate that the proposed method is effective and accurate for extracting signal photons. Full article
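A minimal PyTorch sketch of the CBAM block (channel attention followed by spatial attention) that the paper fuses into a GoogLeNet backbone is given below; the reduction ratio and kernel size are commonly used defaults, not necessarily the authors' settings.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))

# Illustrative usage on a dummy feature map (batch 2, 64 channels, 32 x 32)
features = torch.randn(2, 64, 32, 32)
print(CBAM(64)(features).shape)   # torch.Size([2, 64, 32, 32])
```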
Show Figures

Figure 1. Overview of the experimental areas A, B, C, and D.
Figure 2. Method flow chart.
Figure 3. Schematic diagram of the photon transformation process.
Figure 4. Schematic diagram of the Inception module [39].
Figure 5. Schematic diagram of the CAM module [35].
Figure 6. Schematic diagram of the SAM module [35].
Figure 7. Training process of the network model in experimental areas A and B: change curves of accuracy and loss.
Figure 8. Training process of the network model in experimental area C: change curves of accuracy and loss.
Figure 9. Typical photon neighborhoods of photons A, B, C, and D, and comparison of the denoised results of the proposed method and DBSCAN (SNR = 80 dB) against the validation data.
Figure 10. Photon images of the typical photons A, B, C, and D in Figure 9.
Figure 11. Overall distribution of photon data in experimental area A: original photon data and signal photon data.
Figure 12. Comparison of the details of the gt2L track results in experimental area A (SNR = 70 dB): validation, optical remote sensing image, the proposed method, DBSCAN, OPTICS, and BED.
Figure 13. Curves of change in the four validation indicators (Precision, Recall, OA, and Kappa) with SNR in experimental area A.
Figure 14. Overall distribution of photon data in experimental area B: original photon data and signal photon data.
Figure 15. Comparison of the details of the gt2R track results in experimental area B (SNR = 70 dB).
Figure 16. Curves of change in the four validation indicators with SNR in experimental area B.
Figure 17. Overall distribution of photon data in experimental area C: original photon data and signal photon data.
Figure 18. Comparison of the details of the gt3L track results in experimental area C (SNR = 80 dB).
Figure 19. Curves of change in the four validation indicators with SNR in experimental area C.
Figure 20. Overall distribution of photon data in experimental area D: original photon data and signal photon data.
Figure 21. Comparison of the details of the gt2R track results in experimental area D (SNR = 80 dB).
Figure 22. Curves of change in the four validation indicators with SNR in experimental area D.
Figure 23. Real reference verification in experimental area A: scatterplot of the elevation of gt2R track signal photons against the actual elevation, and the track distribution.
Figure 24. Real reference verification in experimental area B: scatterplot of the elevation of gt1R track signal photons against the actual elevation, and the track distribution.
Figure 25. Real reference verification in experimental area C: scatterplot of the elevation of gt3L track signal photons against the actual elevation, and the track distribution.
Figure 26. Real reference verification in experimental area D: scatterplot of the elevation of gt2R track signal photons against the actual elevation, and the track distribution.
20 pages, 7252 KiB  
Article
Seasonal Variability of Arctic Mid-Level Clouds and the Relationships with Sea Ice from 2003 to 2022: A Satellite Perspective
by Xi Wang, Jian Liu and Hui Liu
Remote Sens. 2024, 16(1), 202; https://doi.org/10.3390/rs16010202 - 3 Jan 2024
Cited by 2 | Viewed by 1645
Abstract
Mid-level clouds play a crucial role in the Arctic. Due to observational limitations, there is scarce research on the long-term evolution of Arctic mid-level clouds. From a satellite perspective, this study attempts to analyze the seasonal variations in Arctic mid-level clouds and explore the possible relationships with sea ice changes using observations from the hyperspectral Atmospheric Infrared Sounder (AIRS) over the past two decades. For mid-level clouds of three layers (648, 548, and 447 hPa) involved in AIRS, high values of effective cloud fraction (ECF) occur in summer, and low values primarily occur in early spring, while the seasonal variations are different. The ECF anomalies are notably larger at 648 hPa than those at 548 and 447 hPa. Meanwhile, the ECF values at 648 hPa show a clear reduced seasonal variability for the regions north of 80°N, which has its minimum coefficient of variation (CV) during 2019 to 2020. The seasonal CV is relatively lower in the regions dominated by Greenland and sea areas with less sea ice coverage. Analysis indicates that the decline in mid-level ECF’s seasonal mean CV is closely correlated to the retreat of Arctic sea ice during September. Singular value decomposition (SVD) analysis reveals a reverse spatial pattern in the seasonal CV anomaly of mid-level clouds and leads anomaly. However, it is worth noting that this pattern varies by region. In the Greenland Sea and areas near the Canadian Arctic Archipelago, both CV and leads demonstrate negative (positive) anomalies, probably attributed to the stronger influence of atmospheric and oceanic circulations or the presence of land on the sea ice in these areas. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
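A minimal Python sketch of the statistic at the core of this study: a yearly seasonal coefficient of variation (CV) computed from monthly ECF values and correlated with a September sea-ice series. The array names and synthetic data are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch: seasonal coefficient of variation (CV) of monthly effective
# cloud fraction (ECF) and its correlation with September sea-ice concentration.
import numpy as np

def seasonal_cv(ecf_monthly):
    """ecf_monthly: array of shape (n_years, 12) with monthly mean ECF."""
    mean = ecf_monthly.mean(axis=1)
    std = ecf_monthly.std(axis=1, ddof=1)
    return std / mean  # one CV value per year

rng = np.random.default_rng(0)
ecf = 0.4 + 0.1 * rng.random((20, 12))       # 20 years of synthetic monthly ECF
sic_september = 0.6 - 0.01 * np.arange(20)   # synthetic September SIC series

cv = seasonal_cv(ecf)
r = np.corrcoef(cv, sic_september)[0, 1]     # Pearson correlation, as in Figure 10
print(f"seasonal CV per year: {cv.round(3)}")
print(f"correlation with September SIC: {r:.2f}")
```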
Figure 1. Study areas of the Arctic region (60–90°N) with the meridional and latitude zones for the subsequent analysis. The colored base map represents the geographic elevations.
Figure 2. Comparisons of (a) mean vertical CF (ECF) between AIRS and CC; (b) Rc between AIRS and CC from July 2006 to February 2011.
Figure 3. Comparisons of spatial distributions of monthly mean mid-level CF (ECF) between AIRS and CC over the Arctic region (60°N to 82°N) during July 2006 to February 2011. (a–c) Represent ECF of AIRS at 648, 548, and 447 hPa, respectively; (d–f) represent CF of CC at 648, 548, and 447 hPa, respectively.
Figure 4. Seasonal variations in Rc at mid-level (648, 548, and 447 hPa) between AIRS and CC over the Arctic region (60°N to 82°N) during July 2006 to February 2011. The error bar represents the standard deviation of each month.
Figure 5. Monthly variations (dot-lines), anomalies (pink shadows for positive anomalies, blue shadows for negative anomalies), and extreme value envelopes (dotted lines) of AIRS mid-level ECF at (a,b) 648 hPa, (c,d) 548 hPa, and (e,f) 447 hPa for the ascending and descending orbits from January 2003 to December 2022.
Figure 6. Monthly variations in AIRS ECF at mid-levels of (a,b) 648 hPa, (c,d) 548 hPa, and (e,f) 447 hPa for different Arctic latitude zones (60–65°N, 65–70°N, 70–75°N, 75–80°N, 80–85°N, 85–90°N) of ascending and descending orbits from January 2003 to December 2022.
Figure 7. CV of AIRS ECF at mid-levels of (a,b) 648 hPa, (c,d) 548 hPa, and (e,f) 447 hPa for different Arctic latitude zones (60–65°N, 65–70°N, 70–75°N, 75–80°N, 80–85°N, 85–90°N) of ascending and descending orbits from 2003 to 2022.
Figure 8. Monthly variations in AIRS ECF at mid-levels of (a,b) 648 hPa, (c,d) 548 hPa, and (e,f) 447 hPa for different Arctic meridional zones (180–120°W, 120–60°W, 60–0°W, 0–60°E, 60–120°E, 120–180°E) of ascending and descending orbits from January 2003 to December 2022.
Figure 9. CV of AIRS ECF at mid-levels of (a,b) 648 hPa, (c,d) 548 hPa, and (e,f) 447 hPa for different Arctic meridional zones (180–120°W, 120–60°W, 60–0°W, 0–60°E, 60–120°E, 120–180°E) of ascending and descending orbits from 2003 to 2022.
Figure 10. Correlations between the CV of AIRS ECF and the September SIC from 2003 to 2022 for the sea regions of the Arctic (north of 60°N). The thin solid and dotted lines represent the ECF CV at 648, 548, and 447 hPa of ascending and descending orbits, respectively. The thick gray solid and dashed lines represent the average CV of the mid-level ECF for the ascending and descending orbits, respectively. The blue bars represent the SIC in September of each year.
Figure 11. (a,b,d,e,g,h,j,k) Spatial patterns of the heterogeneous correlations and (c,f,i,l) time series of the left and right fields of the SVD1 to SVD4 modes for the CV of AIRS mid-level ECF (ascending orbits) and leads numbers in November in the Arctic. L represents left field; R represents right field. The dotted area in panels (a,b,d,e,g,h,j,k) indicates correlations passing the 95% statistical significance level.
Figure 12. As in Figure 11, but for descending orbits.
14 pages, 7477 KiB  
Technical Note
Elevation-Dependent Contribution of the Response and Sensitivity of Vegetation Greenness to Hydrothermal Conditions on the Grasslands of Tibet Plateau from 2000 to 2021
by Yatang Wu, Changliang Shao, Jing Zhang, Yiliang Liu, Han Li, Leichao Ma, Ming Li, Beibei Shen, Lulu Hou, Shiyang Chen, Dawei Xu, Xiaoping Xin and Xiaoni Liu
Remote Sens. 2024, 16(1), 201; https://doi.org/10.3390/rs16010201 - 3 Jan 2024
Cited by 2 | Viewed by 1626
Abstract
The interrelation between grassland vegetation greenness and hydrothermal conditions on the Tibetan Plateau demonstrates a significant correlation. However, understanding the spatial patterns and the degree of this correlation, especially in relation to minimum and maximum air temperatures across various vertical gradient zones of the Plateau, necessitates further examination. Utilizing the normalized difference phenology index (NDPI) and considering four distinct hydrothermal conditions (minimum, maximum, mean temperature, and precipitation) during the growing season, an analysis was conducted on the correlation of NDPI with hydrothermal conditions across plateau elevations from 2000 to 2021. Results indicate that the correlation between vegetation greenness and hydrothermal conditions on the Tibetan Plateau grasslands is spatially varied. There is a pronounced negative correlation of greenness with maximum temperature and precipitation in the northeastern plateau, while other areas exhibit stronger positive correlations with mean temperature. Additionally, as elevation increases, the positive correlation and sensitivity of alpine grassland vegetation greenness to minimum temperature significantly intensify, contrary to the effects observed with maximum temperature. The correlations between greenness and mean temperature in relation to elevational changes primarily exhibit a unimodal pattern across the Tibetan Plateau. These findings emphasize that the correlation and sensitivity of grassland vegetation greenness to hydrothermal conditions are both elevation-dependent and spatially distinct. Full article
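A minimal sketch of the kind of partial correlation used here: the correlation of a greenness series with one hydrothermal variable while regressing out the others. Variable names, the choice of controlled variables, and the synthetic data are assumptions for illustration only.

```python
# Minimal sketch: partial correlation of NDPI with Tmin, controlling for the
# other hydrothermal variables via regression residuals (synthetic yearly data).
import numpy as np

def residuals(y, controls):
    X = np.column_stack([np.ones(len(y)), *controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(y, x, controls):
    return np.corrcoef(residuals(y, controls), residuals(x, controls))[0, 1]

rng = np.random.default_rng(1)
years = 22                                    # 2000-2021
tmin, tmax, gsap = rng.random((3, years))     # synthetic hydrothermal series
ndpi = 0.3 + 0.5 * tmin + 0.05 * rng.random(years)

print(partial_corr(ndpi, tmin, [tmax, gsap]))
```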
Figure 1. Geographical location (a), elevation (b), grassland type (c) of the study area.
Figure 2. Mean value (a), stability (b), trends (c) and significance of the trends (d) of NDPI in the peak season over the period 2000–2021.
Figure 3. Magnitude and trend in hydrothermal conditions on the Tibet plateau. The magnitude of Tmin (°C) (a), Tmax (°C) (b), Tmean (°C) (c) and GSAP (mm) (d) over the period 2000–2021; the trend of Tmin (°C year−1) (e), Tmax (°C year−1) (f), Tmean (°C year−1) (g) and GSAP (mm year−1) (h) over the period 2000–2021.
Figure 4. The spatial patterns of the hydrothermal response and sensitivity of NDPI. The partial correlation between NDPI and Tmin (a), Tmax (b), Tmean (c) and GSAP (d), respectively; the sensitivity of NDPI to Tmin (e), Tmax (f), Tmean (g) and GSAP (h), respectively.
Figure 5. Area proportion of the partial correlation between NDPI and hydrothermal conditions for different grassland types.
Figure 6. The response and sensitivity of NDPI to hydrothermal conditions in the peak season along the elevational gradient (at 10 m interval bins). In (a–d) the method of the partial correlation coefficient was used; in (e–h) the method of the regression coefficient was used.
24 pages, 13401 KiB  
Article
A Spatial Downscaling Framework for SMAP Soil Moisture Based on Stacking Strategy
by Jiaxin Xu, Qiaomei Su, Xiaotao Li, Jianwei Ma, Wenlong Song, Lei Zhang and Xiaoye Su
Remote Sens. 2024, 16(1), 200; https://doi.org/10.3390/rs16010200 - 3 Jan 2024
Cited by 8 | Viewed by 2850
Abstract
Soil moisture (SM) data can provide guidance for decision-makers in fields such as drought monitoring and irrigation management. The Soil Moisture Active Passive (SMAP) satellite offers sufficient spatial resolution for global-scale applications, but its relatively coarse resolution limits its utility at the regional scale. To address this issue, this study proposed a downscaling framework based on the Stacking strategy. The framework integrated extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and categorical boosting (CatBoost) to generate 1 km resolution SM data using 15 high-resolution factors derived from multi-source datasets. In particular, to test the influence of terrain partitioning on downscaling results, Anhui Province, which has diverse terrain features, was selected as the study area. The results indicated that the performance of the three base models varied, and the developed Stacking strategy maximized the potential of each model with encouraging downscaling results. Specifically, we found that: (1) The Stacking model achieved the highest accuracy in all regions, and the performance order of the base models was: XGBoost > CatBoost > LightGBM. (2) Compared with the measured SM at 87 sites, the downscaled SM outperformed other 1 km SM products as well as the downscaled SM without partitioning, with an average ubRMSE of 0.040 m³/m³. (3) The downscaled SM responded positively to rainfall events and mitigated the systematic bias of SMAP. It also preserved the spatial trend of the original SMAP, with higher levels in the humid region and relatively lower levels in the semi-humid region. Overall, this study provided a new strategy for soil moisture downscaling and revealed some interesting findings related to the effectiveness of the Stacking model and the impact of terrain partitioning on downscaling accuracy. Full article
(This article belongs to the Special Issue Satellite Soil Moisture Estimation, Assessment, and Applications)
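A minimal sketch of the Stacking idea and of the ubRMSE metric quoted in the abstract. The paper stacks XGBoost, LightGBM and CatBoost; here scikit-learn regressors stand in so the example needs no extra dependencies, and all data are synthetic. This is not the authors' implementation.

```python
# Minimal sketch: stacking two base regressors with a linear meta-learner, then
# scoring with unbiased RMSE (ubRMSE) against held-out "measured" SM.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 15))                 # 15 high-resolution predictors (LST, NDVI, DEM, ...)
y = 0.1 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.random(500)   # synthetic coarse-scale SM

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[("gbr", GradientBoostingRegressor()), ("rf", RandomForestRegressor())],
    final_estimator=Ridge(), cv=5)
stack.fit(X_tr, y_tr)

pred = stack.predict(X_te)
ubrmse = np.sqrt(np.mean(((pred - pred.mean()) - (y_te - y_te.mean())) ** 2))
print(f"ubRMSE: {ubrmse:.3f} m^3/m^3")
```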
Figure 1. Location of the study area.
Figure 2. Schematic of the Stacking strategy.
Figure 3. Schematic of the proposed downscaling framework.
Figure 4. Performance of XGBoost, LightGBM, CatBoost, and Stacking on the test set.
Figure 5. Spatial distribution of (a) R and (b) ubRMSE between downscaled SM and measured SM at 87 SM sites from 1 April to 1 November 2019.
Figure 6. The relationships between measured SM and downscaled SM. (a) HB, (b) JH, (c) WX, (d) YJ, and (e) WN. The color of the points represents the bias value. n represents the number of sites, and sites with a correlation coefficient less than 0 are excluded from the figure.
Figure 7. Comparison of all SM products with measured SM. SMAP L4-SM denotes the original 9 km SMAP product. SMCI1.0 refers to the 1 km SM product of Shangguan et al. SMAP D-SM is the 1 km SM product released by SMAP in March 2023. Downscaled SM is the 1 km downscaling results of this study. Downscaled SM(WP) indicates the 1 km downscaling results without partition modeling.
Figure 8. Time series comparisons of SMCI1.0, SMAP D-SM, SMAP L4-SM, downscaled SM, downscaled SM (WP), measured SM, and precipitation data at 11 selected SM sites. Note: 2019-10-01 is not displayed because only one site had valid measured SM data on that day.
Figure 9. Mean distribution maps from April to November 2019 for SMAP L4-SM, downscaled SM, precipitation, LST, and distribution maps for DEM and clay. Note that water bodies are not excluded from the plots.
Figure 10. SHAP values and feature importance of Stacking models in different regions: (a) HB, (b) JH, (c) WX, (d) YJ, and (e) WN. LC_1.0, LC_2.0, LC_3.0, and LC_4.0 are new features generated by the one-hot encoding of LC features. (f) Feature importance plot obtained by averaging the absolute values of SHAP values; note that the new features derived from one-hot encoding of LC have been summed up as LC.
Figure 11. Co-interaction of LST and DEM on SM in the JH and WX regions. (a) JH, (b) WX.
15 pages, 12461 KiB  
Article
Experimental Analysis of Deep-Sea AUV Based on Multi-Sensor Integrated Navigation and Positioning
by Yixu Liu, Yongfu Sun, Baogang Li, Xiangxin Wang and Lei Yang
Remote Sens. 2024, 16(1), 199; https://doi.org/10.3390/rs16010199 - 3 Jan 2024
Cited by 6 | Viewed by 2537
Abstract
The operation of underwater vehicles in deep waters is a very challenging task. The use of AUVs (Autonomous Underwater Vehicles) is the preferred option for underwater exploration activities. They can be autonomously navigated and controlled in real time underwater, which is only possible with precise spatio-temporal information. Navigation and positioning systems based on LBL (Long-Baseline) or USBL (Ultra-Short-Baseline) systems have their own characteristics, so the choice of system is based on the specific application scenario. However, comparative experiments on AUV navigation and positioning under both systems are rarely conducted, especially in the deep sea. This study describes navigation and positioning experiments on AUVs in deep-sea scenarios and compares the accuracy of the USBL and LBL/SINS (Strap-Down Inertial Navigation System)/DVL (Doppler Velocity Log) modes. In practice, the accuracy of the USBL positioning mode is higher when the AUV is within a 60° observation range below the ship; when the AUV is far away from the ship, the positioning accuracy decreases with increasing range and observation angle, i.e., the positioning error reaches 80 m at 4000 m depth. The navigational accuracy inside and outside the datum array is high when using the LBL/SINS/DVL mode; if the AUV is far from the datum array when climbing to the surface, the LBL cannot provide accurate position calibration while the DVL fails, resulting in large deviations in the SINS results. In summary, the use of multi-sensor combination navigation schemes is beneficial, and accurate position information acquisition should be based on the demand and cost, while other factors should also be comprehensively considered; this paper proposes the use of the LBL/SINS/DVL system scheme. Full article
Figure 1. The principle of navigation and positioning of the AUV based on two models. (The pink and orange arrows indicate the signal transmission and return.)
Figure 2. Block diagram of shallow coupling mode.
Figure 3. The navigation and positioning mode conversion of AUV.
Figure 4. Situation at the site of the AUV deployment operation.
Figure 5. SVP and CTD.
Figure 6. Array of four datum points.
Figure 7. Attitude changes during AUV diving.
Figure 8. Forward velocity during AUV diving.
Figure 9. Depth and altitude information during AUV diving.
Figure 10. AUV trajectory from “Mode I”. (a) Viewpoint 1; (b) Viewpoint 2.
Figure 11. AUV trajectory in both modes.
Figure 12. AUV trajectory in both modes (data do not include AUV climb phase). (a) Front view; (b) top view.
Figure 13. Part of the near-bottom trajectory of an AUV.
18 pages, 5783 KiB  
Article
Performance Assessment of a High-Frequency Radar Network for Detecting Surface Currents in the Pearl River Estuary
by Langfeng Zhu, Tianyi Lu, Fan Yang, Chunlei Wei and Jun Wei
Remote Sens. 2024, 16(1), 198; https://doi.org/10.3390/rs16010198 - 3 Jan 2024
Cited by 2 | Viewed by 1808
Abstract
The performance of a high-frequency (HF) radar network situated within the Pearl River Estuary from 17 July to 13 August 2022 is described via a comparison with seven acoustic Doppler current profilers (ADCPs). The radar network consists of six OSMAR-S100 compact HF radars, with a transmitting frequency of 13–16 MHz and a direction-finding technique. Both the radial currents and vector velocities showed good agreement with the ADCP results (coefficient of determination r²: 0.42–0.78; RMS difference of radials: 11–21.6 cm s⁻¹; bearing offset Δθ: 4.8°–16.1°; complex correlation coefficient γ: 0.62–0.96; and phase angle α: −24.3° to 17.8°). For these radars, the Δθ values are not constant but vary with azimuthal angles. The relative positions between the HF radar and ADCPs, as well as factors such as the presence of island terrain obstructing the signal, significantly influence the errors. The results of spectral analysis also demonstrate a high level of consistency and the capability of HF radar to capture diurnal and semidiurnal tidal frequencies. The tidal characteristics and the Empirical Orthogonal Function (EOF) results measured by the HF radars also resemble the ADCPs and align with the characteristics of the estuarine current field. Full article
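The complex correlation coefficient γ and phase angle α quoted above can be computed from the two vector velocity series written as u + iv. A minimal sketch with synthetic series (not the study's data) is shown below.

```python
# Minimal sketch: complex correlation coefficient and phase angle between two
# vector current time series, as used to compare radar and ADCP velocities.
import numpy as np

def complex_corr(u1, v1, u2, v2):
    w1 = u1 + 1j * v1
    w2 = u2 + 1j * v2
    num = np.mean(np.conj(w1) * w2)
    gamma = np.abs(num) / np.sqrt(np.mean(np.abs(w1) ** 2) * np.mean(np.abs(w2) ** 2))
    alpha = np.degrees(np.angle(num))   # mean veering of series 2 relative to series 1
    return gamma, alpha

t = np.linspace(0, 4 * np.pi, 200)
u_adcp, v_adcp = np.cos(t), np.sin(t)                 # synthetic rotary current
u_radar, v_radar = np.cos(t - 0.2), np.sin(t - 0.2)   # same current, slightly rotated

print(complex_corr(u_adcp, v_adcp, u_radar, v_radar))
```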
Figure 1. Study area showing the PRE and adjacent sea. Circular sectors show the maximum radar ranges of single stations. Radar sites are marked by black dots. ADCPs are marked by blue triangles.
Figure 2. (a) Timelines of ADCP data availability. (b–g) Time series of single radar coverage, defined as the number of cells returning data each hour, for the six HF radars.
Figure 3. Comparison of (a) hourly and (b) 36 h lowpass-filtered radar and ADCP radial current time series at mooring R1 for the HESD site. (c) Scatterplot of Vadcp vs. Vradar from (a).
Figure 4. (a) Map shows radar (HESD, bold black dot) and R1 ADCP (black triangle), with HF radar measurement data points (black dots). The red triangle shows the location with the highest r² between Vadcp and Vradar. The green points show the locations of the r² profiles in (b–d). Profiles of r² between Vadcp and Vradar are shown along ranges of (b) R_R1-0.25 km, (c) R_R1, and (d) R_R1 + 0.25 km.
Figure 5. Same as Figure 4, but for the GUIS radar and R2 ADCP comparison.
Figure 6. (a) Power spectra of Vadcp from R1 and Vradar from HESD. (b) As in (a), but for R7 and WSDL. Same for (c) R2 and HEQI, (d) R2 and GUIS, (e) R2 and MWDA and (f) R1 and DGDA. The K1 tidal, O1 tidal, M2 tidal, and inertial frequencies are shown in all panels.
Figure 7. (a) Stick plots for ADCP and radar and comparison of the rotary power spectra calculated with radar and ADCP velocity data at site R1. The two y-axes in (b,c) are the average spectral energy densities associated with the (b) clockwise and (c) counterclockwise components of the velocity hodograph ellipse.
Figure 8. Comparisons of the ADCP (black) and radar (blue) M2 tidal ellipses.
Figure 9. Comparisons of the ADCP (black) and radar (blue) EOF ellipses for (a) total velocity and (b) residual currents.
Figure 10. Comparisons of the first EOF mode for ADCP (black arrow) and radar (blue arrow) total velocities.
30 pages, 23312 KiB  
Article
A Multisensory Analysis of the Moisture Course of the Cave of Altamira (Spain): Implications for Its Conservation
by Vicente Bayarri, Alfredo Prada, Francisco García, Carmen De Las Heras and Pilar Fatás
Remote Sens. 2024, 16(1), 197; https://doi.org/10.3390/rs16010197 - 3 Jan 2024
Cited by 7 | Viewed by 2483
Abstract
This paper addresses the conservation problems of the cave of Altamira, a UNESCO World Heritage Site in Santillana del Mar, Cantabria, Spain, due to the effects of moisture and water inside the cave. The study focuses on the description of methods for estimating the trajectory and zones of humidity from the external environment to its eventual dripping on valuable cave paintings. To achieve this objective, several multisensor remote sensing techniques, both aerial and terrestrial, such as 3D laser scanning, a 2D ground penetrating radar, photogrammetry with unmanned aerial vehicles, and high-resolution terrestrial techniques are employed. These tools allow a detailed spatial analysis of the moisture and water in the cave. The paper highlights the importance of the dolomitic layer in the cave and how it influences the preservation of the ceiling, which varies according to its position, whether it is sealed with calcium carbonate, actively dripping, or not dripping. In addition, the crucial role of the central fracture and the areas of direct water infiltration in this process is examined. This research aids in understanding and conserving the site. It offers a novel approach to water-induced deterioration in rock art for professionals and researchers. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research II)
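As an illustration of the GPR attribute analysis used in this study (cf. Figure 8), the following sketch derives instantaneous magnitude, phase and frequency of a radar trace from the Hilbert transform. The sampling rate and the synthetic trace are assumptions; this is not the study's code.

```python
# Minimal sketch: instantaneous GPR trace attributes from the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs = 1e9                                         # 1 GS/s sampling, illustrative
t = np.arange(2048) / fs
trace = np.exp(-((t - 1e-6) ** 2) / (2 * (5e-8) ** 2)) * np.cos(2 * np.pi * 4e8 * t)

analytic = hilbert(trace)
magnitude = np.abs(analytic)                     # instantaneous magnitude (envelope)
phase = np.unwrap(np.angle(analytic))            # instantaneous phase
inst_freq = np.gradient(phase, 1 / fs) / (2 * np.pi)   # instantaneous frequency (Hz)

print(round(magnitude.max(), 3), round(inst_freq[1000] / 1e6, 1), "MHz near the reflection")
```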
Figure 1. Location and map of the surroundings of cave of Altamira where the outline of the cavity is projected onto the surface (shown by a black line). It is also shown highlighting the orthoimage of the ceiling of the Polychrome Hall where the cave paintings are located.
Figure 2. Image showing the microcorrosion of the supporting rock and the craquele of the pigment. Note that white areas are produced by light glints so no information is available.
Figure 3. General workflow scheme followed in this study (adapted from [41,42]). The black hexagons indicate the integrated models, the gray hexagons the analysis performed on the models and the colored hexagons the analysis performed from the previous ones.
Figure 4. (a) Overlap of exterior sinks, interior drip points, position of the central fracture and position of the GPR profiles projected on the ceiling of the Polychrome Hall that were carried out with the 400 MHz and 900 MHz antennas. The green rectangle shows the position of (b) Image with detail of the central fracture sealed with mortar framed in green in (a).
Figure 5. (a) Location of the ALT-1 active drip zone at the hind legs of the great bison figure (marked in a green rectangle). (b) Image with detail of the hind leg area of the great bison showing active drip points (indicated with blue polygons) and streams (indicated with blue lines). The thicker the blue lines (streams) the higher the hierarchy in the ALT-1 active drip zone.
Figure 6. Detailed image of a sheet of water on Polychrome ceiling showing the surface tension of the water.
Figure 7. (a) Basins, sinks and classified streams according to the Strahler method [61] in the overlying layer of the Polychrome Hall; (b) Superimposition of the above results on the orthoimage of the Polychrome ceiling.
Figure 8. Example of GPR attributes from Hilbert transform on reflection profiles D6-T1, D6-T2, D6-T3 and D6-T4 recorded with the 400 MHz and 900 MHz antennas air-coupled to the ceiling of the Polychrome Hall. (a) Processed reflection profiles, and after the application of the following attributes: (b) instantaneous magnitude; (c) instantaneous phase; (d) instantaneous frequency. The active drip zone ALT-1 (indicated by the blue arrow) and the central fracture (indicated by the red arrow) are shown. The correlation of these reflection profiles with the stratigraphic layers of the Polychrome Hall overlying layer is also shown (the thicknesses of each recorded layer are indicated by red vertical arrows).
Figure 9. (a) Superimposition of basin, streams, drip points on orthoimage to check washed calculated points with real ones; (b) drip point located in the hump area or central part of the bison.
Figure 10. (a) General view of the 3D model of basins in Polychrome Hall ceiling showing the basins, streams and drip points; (b) Detail view of the model shown in Figure 9.
Figure 11. (a) Cross section showing the main vertical discontinuities in the overlying Polychrome Hall layer obtained by precise georeferencing of the inner reflection profiles D6-T1, D6-T2, D6-T3 and D6-T4 and the outer reflection profile T106; (b) Cross section with the instantaneous magnitude attribute derived from these refraction profiles shows the main areas of moisture/water infiltration or water pathways in the overlying Polychrome Hall layer. The central fracture can be seen to run from the lapiés zone (fissured layer) to the basal surface of the Polychrome layer (indicated by the red dashed line). The surface sinks coinciding with or close to the layout of these reflection profiles have been plotted (indicated by ochre triangles). Also shown are the signal attenuation zone with the 400 MHz antenna and the correlation of the reflection profiles with the stratigraphic units of the overlying layer of the Polychrome Hall.
20 pages, 4338 KiB  
Article
Cross-Modal Retrieval and Semantic Refinement for Remote Sensing Image Captioning
by Zhengxin Li, Wenzhe Zhao, Xuanyi Du, Guangyao Zhou and Songlin Zhang
Remote Sens. 2024, 16(1), 196; https://doi.org/10.3390/rs16010196 - 3 Jan 2024
Cited by 6 | Viewed by 3109
Abstract
Two-stage remote sensing image captioning (RSIC) methods have achieved promising results by incorporating additional pre-trained remote sensing tasks to extract supplementary information and improve caption quality. However, these methods face limitations in semantic comprehension, as pre-trained detectors/classifiers are constrained by predefined labels, leading to an oversight of the intricate and diverse details present in remote sensing images (RSIs). Additionally, the handling of auxiliary remote sensing tasks separately can introduce challenges in ensuring seamless integration and alignment with the captioning process. To address these problems, we propose a novel cross-modal retrieval and semantic refinement (CRSR) RSIC method. Specifically, we employ a cross-modal retrieval model to retrieve relevant sentences of each image. The words in these retrieved sentences are then considered as primary semantic information, providing valuable supplementary information for the captioning process. To further enhance the quality of the captions, we introduce a semantic refinement module that refines the primary semantic information, which helps to filter out misleading information and emphasize visually salient semantic information. A Transformer Mapper network is introduced to expand the representation of image features beyond the retrieved supplementary information with learnable queries. Both the refined semantic tokens and visual features are integrated and fed into a cross-modal decoder for caption generation. Through extensive experiments, we demonstrate the superiority of our CRSR method over existing state-of-the-art approaches on the RSICD, the UCM-Captions, and the Sydney-Captions datasets. Full article
(This article belongs to the Special Issue Deep Learning in Optical Satellite Images)
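A minimal sketch of the retrieval step described above: given one image embedding and a pool of sentence embeddings (e.g. produced by a CLIP-style encoder), return the top-k sentences by cosine similarity. The embeddings here are random stand-ins, not outputs of the authors' model.

```python
# Minimal sketch: cosine-similarity retrieval of relevant sentences for an image.
import numpy as np

rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)                  # stand-in image embedding
sentence_embs = rng.normal(size=(1000, 512))      # stand-in sentence pool embeddings
sentences = [f"sentence {i}" for i in range(1000)]

def top_k_sentences(img_emb, sent_embs, sents, k=5):
    img = img_emb / np.linalg.norm(img_emb)
    mat = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    scores = mat @ img                            # cosine similarity per sentence
    order = np.argsort(scores)[::-1][:k]
    return [(sents[i], float(scores[i])) for i in order]

print(top_k_sentences(image_emb, sentence_embs, sentences))
```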
Figure 1. The overall structure of our CRSR model. The image features and text features are extracted by the CLIP image encoder and text encoder from the input image and sentence pool, respectively. Image features are transformed into a sequence of visual tokens through Transformer Mapper network with learnable queries. Based on the cross-modal retrieval, the retrieved relevant sentences are separated as a series of words and fed into the semantic refinement module to filter out irrelevant words. Finally, the obtained visual tokens with query tokens and semantic tokens are both fed into a cross-modal decoder to generate the corresponding image captions.
Figure 2. The structure of the ith cross-modal transformer block of the decoding module. The “M-H Att” denotes the multi-head attention layer. The “Input Embed” denotes the input of the ith block.
Figure 3. Captioning results of the baseline model and our proposed CRSR method. The “GT” denotes the ground truth captions. The “base” denotes the description generated by the baseline model. “ours” denotes the description generated by our CRSR method. “retr” denotes the retrieved words from the sentence pool, while “miss” denotes the overlooked words during retrieval. Blue words denote the scene uniquely captured by our model compared with baseline model. Red words denote the incorrectly generated words in description and retrieved words.
Figure 4. Visualized attention matrix between the semantic tokens and the captioning of the input images.
20 pages, 9487 KiB  
Article
Compound-Gaussian Model with Nakagami-Distributed Textures for High-Resolution Sea Clutter at Medium/High Grazing Angles
by Guanbao Yang, Xiaojun Zhang, Pengjia Zou and Penglang Shui
Remote Sens. 2024, 16(1), 195; https://doi.org/10.3390/rs16010195 - 2 Jan 2024
Cited by 4 | Viewed by 1569
Abstract
In this paper, a compound-Gaussian model (CGM) with Nakagami-distributed textures (CGNG) is proposed to model sea clutter at medium/high grazing angles. The corresponding amplitude distributions are referred to as the CGNG distributions. The analysis of measured data shows that the CGNG distributions can provide a better goodness of fit to sea clutter at medium/high grazing angles than the four types of commonly used biparametric distributions. As a new type of amplitude distribution, its parameter estimation is important for modelling sea clutter. The estimators from the method of moments (MoM) and the [zlog(z)] estimator from the method of generalized moments are first given for the CGNG distributions. However, these estimators are sensitive to sporadic outliers of large amplitude in the data. As the second contribution of the paper, outlier-robust tri-percentile estimators of the CGNG distributions are proposed. Moreover, experimental results using simulated and measured sea clutter data are reported to show the suitability of the CGNG amplitude distributions and outlier-robustness of the proposed tri-percentile estimators. Full article
(This article belongs to the Special Issue Radar Signal Processing and Imaging for Ocean Remote Sensing)
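A minimal simulation sketch of the compound-Gaussian construction underlying the model: the amplitude is the product of the square root of a Nakagami-distributed texture and the modulus of complex Gaussian speckle. The shape/scale parameterization below is an assumption for illustration and is not necessarily the (λ, b) convention of the paper.

```python
# Minimal sketch: simulate compound-Gaussian clutter with a Nakagami texture
# and inspect a simple moment ratio of the amplitude (method-of-moments style).
import numpy as np

rng = np.random.default_rng(0)
n, shape_m, omega = 100_000, 0.8, 1.0

# Nakagami(m, omega) sample = sqrt(Gamma(shape=m, scale=omega/m))
texture = np.sqrt(rng.gamma(shape_m, omega / shape_m, n))
speckle = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # unit-power speckle
amplitude = np.sqrt(texture) * np.abs(speckle)

m1, m2 = amplitude.mean(), np.mean(amplitude ** 2)
print(f"first moment {m1:.3f}, second moment {m2:.3f}, ratio m1^2/m2 {m1**2 / m2:.3f}")
```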
Figure 1. CGNG distributions with different values of λ: (a) main parts; (b) tail parts.
Figure 2. Comparison of the tails of the five biparametric distributions when the parameters of the CGNG distributions take different values: (a) b = 1, λ = 0.1; (b) b = 1, λ = 0.5; (c) b = 1, λ = 1; (d) b = 1, λ = 2.
Figure 3. (a) The ratio as a function of λ. (b) Derivative of the inverse function.
Figure 4. Empirically optimal curve of (α, β) with different values of sample size N: (a) N = 1000; (b) N = 3000; (c) N = 5000; (d) N = 10,000.
Figure 5. Empirically optimal curve of γ.
Figure 6. Schematic diagram of medium/high grazing angle data collection.
Figure 7. (a) Power map of airborne Ku-band data (dB). (b) Region division in terms of the grazing angle and clutter map cell segmentation in each region where one net denotes a clutter map cell.
Figure 8. Comparison of the performance of the five distributions in the two cases: (a) PDFs comparison in case 1; (b) CDFs comparison in case 1; (c) PDFs comparison in case 2; (d) CDFs comparison in case 2.
Figure 9. RRMSEs of the estimated parameters with different values of β in the absence of outliers: (a) RRMSE of λ; (b) RRMSE of b.
Figure 10. RRMSE comparison of the four estimators of the CGNG distributions in the absence of outliers: (a) RRMSE of λ; (b) RRMSE of b.
Figure 11. RRMSE comparison of the four estimators of the CGNG distributions in the presence of outliers: (a) RRMSE of λ; (b) RRMSE of b.
Figure 12. Performance comparison of the tri-percentile estimator (β = 0.85), the MoM (n = 1) estimator, and [zlog(z)] estimator on the measured data: (a) power map of the measured data (dB); (b) fitting result in region A; (c) fitting result in region B; (d) fitting result in region C.
17 pages, 1469 KiB  
Article
Multi-Satellite Imaging Task Planning for Large Regional Coverage: A Heuristic Algorithm Based on Triple Grids Method
by Feng Li, Qiuhua Wan, Feifei Wen, Yongkui Zou, Qien He, Da Li and Xing Zhong
Remote Sens. 2024, 16(1), 194; https://doi.org/10.3390/rs16010194 - 2 Jan 2024
Cited by 3 | Viewed by 2008
Abstract
Over the past few decades, there has been a significant increase in the number of Earth observation satellites, and the area of ground targets requiring observation has also been expanding. To effectively utilize the capabilities of these satellites and capture larger areas of ground targets, it has become essential to plan imaging tasks for large regional coverage using multiple satellites. First, we establish a 0-1 integer programming model to accurately describe the problem and analyze the challenges associated with solving the model. Second, we propose a heuristic algorithm based on the triple grids method. This approach utilizes a generated grid to create fewer candidate strips, a calculation grid to determine the effective coverage area more accurately, and a refined grid to solve the issue of repeated coverage of strips. Furthermore, we employ an approximation algorithm to further improve the solutions obtained from the heuristic algorithm. By comparing the proposed method to the traditional greedy heuristic algorithm and three evolutionary algorithms, the results show that our method has better performance in terms of coverage and efficiency. Full article
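A minimal sketch of the coverage-selection step that this kind of heuristic builds on: each candidate strip is modeled as a set of grid-cell indices, and strips are picked greedily by the largest newly covered area. The data and names below are illustrative; this is not the triple grids implementation itself.

```python
# Minimal sketch: greedy selection of strips that maximize newly covered grid cells.
def greedy_cover(strips, n_cells, max_strips):
    covered, chosen = set(), []
    for _ in range(max_strips):
        best = max(strips, key=lambda s: len(strips[s] - covered))
        if not strips[best] - covered:       # no strip adds new coverage
            break
        chosen.append(best)
        covered |= strips[best]
    return chosen, len(covered) / n_cells    # selected strips and coverage rate

strips = {"s1": {0, 1, 2, 3}, "s2": {2, 3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 5}}
print(greedy_cover(strips, n_cells=8, max_strips=3))
```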
Figure 1. Schematic diagram of point targets and regional targets.
Figure 2. Flowchart of multi-satellite imaging task planning for large regional coverage.
Figure 3. Schematic diagram of regional discretization.
Figure 4. Schematic diagram of generating strips based on grid.
Figure 5. Schematic diagram of generated grid and calculation grid.
Figure 6. Schematic diagram of joint cover.
Figure 7. Schematic diagram of adjacent strips.
Figure 8. Large region imaging results. (a,c) are the imaging results of Sichuan and Yunnan using the GHA; (b,d) are the imaging results of Sichuan and Yunnan using the TG-GHA.
Figure 9. The variation of grid coverage rate with increasing number of selected strips. (a) Sichuan; (b) Yunnan.
Figure 10. Large region imaging results using the approximation algorithm. (a–d) Sichuan; (e–h) Yunnan.
20 pages, 1544 KiB  
Article
Hyperspectral Image Classification Using Spectral–Spatial Double-Branch Attention Mechanism
by Jianfang Kang, Yaonan Zhang, Xinchao Liu and Zhongxin Cheng
Remote Sens. 2024, 16(1), 193; https://doi.org/10.3390/rs16010193 - 2 Jan 2024
Cited by 10 | Viewed by 4132
Abstract
In recent years, deep learning methods utilizing convolutional neural networks have been extensively employed in hyperspectral image (HSI) classification applications. Nevertheless, while a substantial number of stacked 3D convolutions can indeed achieve high classification accuracy, they also introduce a significant number of parameters to the model, resulting in inefficiency. Furthermore, such intricate models often exhibit limited classification accuracy when confronted with restricted sample data, i.e., small sample problems. Therefore, we propose a spectral–spatial double-branch network (SSDBN) with an attention mechanism for HSI classification. The SSDBN is designed with two independent branches to extract spectral and spatial features, respectively, incorporating multi-scale 2D convolution modules, long short-term memory (LSTM), and an attention mechanism. The flexible use of 2D convolution, instead of 3D convolution, significantly reduces the model’s parameter count, while the effective spectral–spatial double-branch feature extraction method allows SSDBN to perform exceptionally well in handling small sample problems. When tested on 5%, 0.5%, and 5% of the Indian Pines, Pavia University, and Kennedy Space Center datasets, SSDBN achieved classification accuracies of 97.56%, 96.85%, and 98.68%, respectively. Additionally, we conducted a comparison of training and testing times, with results demonstrating the remarkable efficiency of SSDBN. Full article
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)
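A minimal PyTorch sketch of an ECA-style channel attention block of the kind described for this network (cf. Figure 6). Layer sizes and the fixed kernel size are illustrative assumptions; this is not the authors' exact implementation.

```python
# Minimal sketch: ECA-style channel attention via global pooling + fast 1-D conv.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                       # x: (batch, channels, H, W)
        y = self.avg_pool(x)                    # (B, C, 1, 1) aggregated features
        y = y.squeeze(-1).transpose(-1, -2)     # (B, 1, C) for 1-D conv over channels
        y = self.conv(y).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)              # re-weight each channel

x = torch.randn(2, 64, 9, 9)                    # e.g. 9x9 spatial patch, 64 channels
print(ECA()(x).shape)
```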
Figure 1. Architecture of the proposed model.
Figure 2. Architecture of an RNN.
Figure 3. Overall structure of an LSTM model.
Figure 4. Illustration of SSPM. The spectral sequence of size B is processed through four paths into four subsequences of size B, B/2, B/4, and B/8, where the subsequence of size B is identical to the original spectral sequence. Conv_1, Conv_2, and Conv_3 are all 1D convolutions with kernel_number = 1 and kernel_size = 3. Pooling_1, Pooling_2 and Pooling_3 are 1D average pooling layers with kernel_sizes of 2, 4, and 8 and strides of 2, 4, and 8, respectively.
Figure 5. Illustration of the multi-scale information extraction operation in this paper; the structure of the multi-scale 2D convolution module used is marked with a black dashed box.
Figure 6. Diagram of the ECA module. The given input data will first go through global average pooling channel by channel to obtain the aggregated features. ECA will consider each channel and its k nearest neighbors to complete the computation of channel weights by fast 1D convolution. k is determined adaptively by mapping the channel dimension B.
Figure 7. Diagram of the CBAM module.
Figure 8. Detailed information about the IP dataset. (a) False-color image. (b) Ground truth. (c) Labels illustration.
Figure 9. Detailed information of the UP dataset. (a) False-color image. (b) Ground truth. (c) Labels illustration.
Figure 10. Detailed information of the KSC dataset. (a) False-color image. (b) Ground truth. (c) Labels illustration.
Figure 11. Classification maps on IP dataset. (a) False-color image. (b) Ground truth. (c) SVM-RBF. (d) M3D-CNN. (e) SSRN. (f) HybridSN. (g) DBMA. (h) Proposed.
Figure 12. Classification maps on UP dataset. (a) False-color image. (b) Ground truth. (c) SVM-RBF. (d) M3D-CNN. (e) SSRN. (f) HybridSN. (g) DBMA. (h) Proposed.
Figure 13. Classification maps on KSC dataset. (a) False-color image. (b) Ground truth. (c) SVM-RBF. (d) M3D-CNN. (e) SSRN. (f) HybridSN. (g) DBMA. (h) Proposed.
Figure 14. OA of different methods with different proportions of training samples on the datasets of (a) IP, (b) UP, and (c) KSC.
Figure 15. Impact of the attention mechanism.
Figure 16. OA results using different numbers of multi-scale modules.
22 pages, 12365 KiB  
Article
Development and Evaluation of a Cloud-Gap-Filled MODIS Normalized Difference Snow Index Product over High Mountain Asia
by Gang Deng, Zhiguang Tang, Chunyu Dong, Donghang Shao and Xin Wang
Remote Sens. 2024, 16(1), 192; https://doi.org/10.3390/rs16010192 - 2 Jan 2024
Cited by 60 | Viewed by 2685
Abstract
Accurate snow cover data are critical for understanding the Earth’s climate system, and exploring hydrological processes and regional water resource management over High Mountain Asia (HMA). However, satellite-based remote sensing observations of snow cover have inevitable data gaps originating from cloud cover, sensor, orbital limitations and other factors. Here, an effective cloud-gap-filled (CGF) method was developed to fully fill the data gaps in the Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference snow index (NDSI) product. The CGF method combines the respective strengths of the cubic spline interpolation method and the spatio-temporal weighted method for generating the CGF Terra-Aqua MODIS NDSI product over HMA from 2000 to 2021. Based on the validation results of in situ snow-depth observations, the CGF NDSI product achieves a high overall accuracy (OA) of 93.54–98.08%, a low underestimation error (MU) of 0.15–3.49%, and an acceptable overestimation error (MO) of 0.84–5.77%. Based on the validation results of high-resolution Landsat images, this product achieves an OA of 88.52–92.40%, an omission error (OE) of 1.42–10.28%, and a commission error (CE) of 5.97–17.58%. The CGF MODIS NDSI product can provide scientific support for eco-environment sustainable management in the high mountain region. Full article
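A minimal sketch of the temporal part of this kind of gap filling: cubic-spline interpolation of a cloud-gapped NDSI time series for one pixel. The data are synthetic, and the real product additionally uses a spatio-temporal weighted method for longer gaps.

```python
# Minimal sketch: fill cloud gaps in a per-pixel NDSI time series with a cubic spline.
import numpy as np
from scipy.interpolate import CubicSpline

days = np.arange(30)
ndsi = np.clip(60 + 15 * np.sin(days / 5), 0, 100)     # synthetic clear-sky NDSI
cloudy = np.zeros(30, dtype=bool)
cloudy[[4, 5, 11, 12, 13, 22]] = True                   # cloud-gapped observations

spline = CubicSpline(days[~cloudy], ndsi[~cloudy])      # fit on clear-sky days only
filled = ndsi.copy()
filled[cloudy] = spline(days[cloudy])                   # fill only the gaps

print(np.round(filled[cloudy], 1))
```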
Figure 1. Topography and location of the HMA. In names of subregions from RGI v6.0, these individual letters ‘W’, ‘C’, ‘E’ and ‘S’ correspond to west, central, east and south, respectively. The location of meteorological stations and Landsat-8 OLI scenes utilized for validation, and the verification regions are also shown.
Figure 2. Schematic of the generation procedure and evaluation of the CGF NDSI product.
Figure 3. (a) The mean frequency of CPD from 2017 to 2021, and (b) MAE and (c) RMSE of retrieved NDSI data using different TI methods and STW method under different CPD conditions. The dashed line indicates that CPD is equal to 8 d.
Figure 4. Comparison of TAC NDSI (column 1), CSI NDSI (column 4), STW NDSI (column 5) and CSI-STW NDSI (column 6) in the three regions (R1, R2 and R3), for 10 November 2020.
Figure 5. Comparison of TAC BSC (row 1), Landsat BSC (row 2), Landsat BSC aggregation (row 3), CSI-STW BSC (row 4) corresponding to four Landsat scenes (column 1–4).
Figure 6. A sequence of the CGF NDSI collection from 1 April 2020 to 21 June 2020.
Figure 7. The spatial distribution of the average SCD for HMA (elevation greater than 1500 m) from 2001 to 2021.
Figure 8. Dynamic variations in monthly SCE (including multi-year mean monthly SCE), daily SCE and yearly SCE (including the maximum SCE, mean SCE and minimum SCE) for HMA (elevation greater than 1500 m) from 2000 to 2021.
20 pages, 8019 KiB  
Article
An Embedded-GPU-Based Scheme for Real-Time Imaging Processing of Unmanned Aerial Vehicle Borne Video Synthetic Aperture Radar
by Tao Yang, Xinyu Zhang, Qingbo Xu, Shuangxi Zhang and Tong Wang
Remote Sens. 2024, 16(1), 191; https://doi.org/10.3390/rs16010191 - 2 Jan 2024
Cited by 1 | Viewed by 2369
Abstract
The UAV-borne video SAR (ViSAR) imaging system requires miniaturization, low power consumption, high frame rates, and high-resolution real-time imaging. In order to satisfy the requirements of real-time imaging processing for the UAV-borne ViSAR under limited memory and parallel computing resources, this paper proposes a method of embedded GPU-based real-time imaging processing for the UAV-borne ViSAR. Based on a parallel programming model of the compute unified device architecture (CUDA), this paper designed a parallel computing method for the range-Doppler (RD) and map drift (MD) algorithms. By utilizing the advantages of the embedded GPU’s parallel computing capability, we improved the processing speed of real-time ViSAR imaging. This paper also adopted a unified memory management method, which greatly reduces data replication and communication latency between the CPU and the GPU. The data processing of 2048 × 2048 points took only 1.215 s on the Jetson AGX Orin platform to form nine consecutive frame images with a resolution of 0.15 m, with each frame taking only 0.135 s, enabling real-time imaging at a high frame rate of 5 Hz. In actual testing, continuous mapping can be achieved without losing the scenes, intuitively obtaining the dynamic observation effects of the area. The processing results of the measured data have verified the reliability and effectiveness of the proposed scheme, satisfying the processing requirements for real-time ViSAR imaging. Full article
(This article belongs to the Special Issue Radar and Microwave Sensor Systems: Technology and Applications)
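A minimal NumPy sketch of the range-compression step at the heart of a range-Doppler processor: matched filtering of a linear FM chirp in the frequency domain. The same FFT/multiply/IFFT pattern is what a GPU implementation parallelizes; the waveform parameters and target position below are illustrative assumptions.

```python
# Minimal sketch: frequency-domain pulse compression of a linear FM chirp echo.
import numpy as np

fs, T, B = 100e6, 10e-6, 50e6                  # sample rate, pulse length, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # transmitted LFM pulse

echo = np.zeros(4096, dtype=complex)
echo[1200:1200 + chirp.size] = 0.5 * chirp      # one point-target echo at range bin 1200

n = echo.size
H = np.conj(np.fft.fft(chirp, n))               # matched filter (conjugate reference spectrum)
compressed = np.fft.ifft(np.fft.fft(echo) * H)

print(int(np.argmax(np.abs(compressed))))       # peak at the target's range bin
```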
Figure 1. Imaging mode of the video SAR.
Figure 2. Flowchart of the high-frame-rate ViSAR real-time imaging algorithm.
Figure 3. Flowchart of implementing the range-Doppler algorithm on an embedded GPU.
Figure 4. Flowchart of the map drift algorithm.
Figure 5. Flowchart of error estimation.
Figure 6. System architecture of a heterogeneous computer: (a) discrete architecture; (b) integrated architecture.
Figure 7. Aligned and pinned memory access.
Figure 8. Unaligned and unpinned memory access.
Figure 9. The embedded GPU ViSAR high-frame-rate imaging system.
Figure 10. Atmospheric attenuation map.
Figure 11. Real-time imaging results of the ViSAR based on the Jetson AGX Orin platform: (a) first-frame image; (b) second-frame image; (c) third-frame image; (d) fourth-frame image; (e) fifth-frame image; (f) sixth-frame image; (g) seventh-frame image; (h) eighth-frame image; (i) ninth-frame image.
Figure 12. One-frame imaging results of the ViSAR: (a) Jetson AGX Orin; (b) MATLAB.
Figure 13. ViSAR imaging differences.
23 pages, 11832 KiB  
Article
Extraction of Building Roof Contours from Airborne LiDAR Point Clouds Based on Multidirectional Bands
by Jingxue Wang, Dongdong Zang, Jinzheng Yu and Xiao Xie
Remote Sens. 2024, 16(1), 190; https://doi.org/10.3390/rs16010190 - 2 Jan 2024
Cited by 4 | Viewed by 2116
Abstract
Because of the complex structure and different shapes of building contours, the uneven density distribution of airborne LiDAR point clouds, and occlusion, existing building contour extraction algorithms are subject to such problems as poor robustness, difficulty with setting parameters, and low extraction efficiency. [...] Read more.
Because of the complex structure and different shapes of building contours, the uneven density distribution of airborne LiDAR point clouds, and occlusion, existing building contour extraction algorithms are subject to such problems as poor robustness, difficulty with setting parameters, and low extraction efficiency. To solve these problems, a building contour extraction algorithm based on multidirectional bands was proposed in this study. Firstly, the point clouds were divided into bands with the same width in one direction, the points within each band were vertically projected on the central axis in the band, the two projection points with the farthest distance were determined, and their corresponding original points were regarded as the roof contour points; given that the contour points obtained based on single-direction bands were sparse and discontinuous, different banding directions were selected to repeat the above contour point marking process, and the contour points extracted from the different banding directions were integrated as the initial contour points. Then, the initial contour points were sorted and connected according to the principle of joining the nearest points in the forward direction, and the edges with lengths greater than a given threshold were recognized as long edges, which remained to be further densified. Finally, each long edge was densified by selecting the noninitial contour point closest to the midpoint of the long edge, and the densification process was repeated for the updated long edge. In the end, a building roof contour line with complete details and topological relationships was obtained. In this study, three point cloud datasets of representative building roofs were chosen for experiments. The results show that the proposed algorithm can extract high-quality outer contours from point clouds with various boundary structures, accompanied by strong robustness for point clouds differing in density and density change. Moreover, the proposed algorithm is characterized by easily setting parameters and high efficiency for extracting outer contours. Specific to the experimental data selected for this study, the PoLiS values in the outer contour extraction results were always smaller than 0.2 m, and the RAE values were smaller than 7%. Hence, the proposed algorithm can provide high-precision outer contour information on buildings for applications such as 3D building model reconstruction. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud II)
Show Figures
Figure 1: Flowchart of the building roof's outer contour extraction.
Figure 2: Schematic diagram of single-direction banding and contour point extraction: (a) building roof point cloud; (b) contour polygon; (c) single-direction banding; (d) contour points extracted in the band.
Figure 3: Mean square error and running time under different numbers of anchor points.
Figure 4: Contour extraction under single-direction banding with different W: (a) W = 2d; (b) W = 8d.
Figure 5: Contour point extraction based on single-direction banding.
Figure 6: Influence of the angle between the contour line and the banding direction on banding results: (a) bands when η > 0°; (b) bands when η = 0°; (c) distance between two adjacent contour points corresponding to the minimum angle η.
Figure 7: Maximum distance between two contour points under six-directional banding conditions.
Figure 8: Influence of the different numbers of banding directions on the coherence of the contour points.
Figure 9: Contour points extracted by multidirectional banding.
Figure 10: Schematic diagram of the sorting of the initial contour points: (a) generation of the initial contour line; (b) backward search.
Figure 11: Screening and optimization of long edges: (a) screening long edges; (b) long edge optimization.
Figure 12: Removing noise points from contour lines: (a) inclination angle view; (b) partial top view; (c) partial side view.
Figure 13: Densification of the initial contour: (a) T = 10d; (b) T = 5d.
Figure 14: Original point cloud data.
Figure 15: Contour line extraction results under different T values (T = 4d to 11d).
Figure 16: Contour line extraction results obtained using different algorithms for the three datasets: (a) M1–M4; (b) M5–M8; (c) M9–M15.
Figure 17: PoLiS measurement analysis of the contour extraction results of the different algorithms for building point clouds: (a) Dataset 1; (b) Dataset 2; (c) Dataset 3.
Figure 18: Dataset 4.
Figure 19: Contour extraction efficiency analysis of the different algorithms.
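The single-direction banding step described in the abstract can be sketched as follows. This is a simplified illustration under assumed inputs (a 2D roof point cloud and a band width W), not the authors' implementation; the multidirectional integration, contour sorting, and long-edge densification stages are omitted.

import numpy as np

def band_contour_points(points, width, direction=0.0):
    """Mark contour points of a roof point cloud using bands in one direction.

    points    : (N, 2) array of planimetric (x, y) coordinates
    width     : band width W (e.g., a small multiple of the mean point spacing d)
    direction : banding direction in radians (0 = bands perpendicular to the x axis)
    Returns the indices of the points marked as contour points.
    """
    c, s = np.cos(direction), np.sin(direction)
    across = points[:, 0] * c + points[:, 1] * s      # coordinate across the bands
    along = -points[:, 0] * s + points[:, 1] * c      # projection onto the band's central axis

    band_id = np.floor((across - across.min()) / width).astype(int)
    contour_idx = set()
    for b in np.unique(band_id):
        members = np.where(band_id == b)[0]
        if members.size < 2:
            contour_idx.update(members)               # a lone point is kept as a contour point
            continue
        proj = along[members]
        contour_idx.add(members[np.argmin(proj)])     # extreme point on one end of the band
        contour_idx.add(members[np.argmax(proj)])     # extreme point on the other end
    return np.array(sorted(contour_idx))

# The initial contour set would be the union over several banding directions, e.g.:
# idx = np.unique(np.concatenate([band_contour_points(pts, w, a)
#                                 for a in np.deg2rad([0, 30, 60, 90, 120, 150])]))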
22 pages, 1848 KiB  
Review
GNSS Carrier-Phase Multipath Modeling and Correction: A Review and Prospect of Data Processing Methods
by Qiuzhao Zhang, Longqiang Zhang, Ao Sun, Xiaolin Meng, Dongsheng Zhao and Craig Hancock
Remote Sens. 2024, 16(1), 189; https://doi.org/10.3390/rs16010189 - 2 Jan 2024
Cited by 5 | Viewed by 5031
Abstract
A multipath error is one of the main sources of GNSS positioning errors. It cannot be eliminated by forming double-difference and other methods, and it has become an issue in GNSS positioning error processing, because it is mainly related to the surrounding environment [...] Read more.
Multipath error is one of the main sources of GNSS positioning errors. Because it is mainly related to the environment surrounding the station, it cannot be eliminated by forming double differences or by other such methods, and it has become an issue in GNSS positioning error processing. To address multipath errors, three main mitigation strategies are employed: site selection, hardware enhancements, and data processing. Among these, data processing methods have been a focal point of research due to their cost-effectiveness, impressive performance, and widespread applicability. This paper reviews data processing mitigation methods for GNSS carrier-phase multipath errors. The paper begins by elucidating the origins and mitigation strategies of multipath errors. Subsequently, it reviews the current research status pertaining to data processing methods using stochastic and functional models to counter multipath errors. The paper also provides an overview of filtering techniques for extracting multipath error models from coordinate sequences or observations. Additionally, it introduces the evolution and algorithmic workflow of sidereal filtering (SF) and multipath hemispherical mapping (MHM), from both coordinate and observation domain perspectives. Furthermore, the paper emphasizes the practical significance and research relevance of multipath error processing. It concludes by delineating future research directions in the realm of multipath error mitigation. Full article
Show Figures
Graphical abstract
Figure 1: Two types of multipath effect schematic diagrams: (a) multipath effect of horizontal ground signal reflection; (b) multipath effect of vertical building signal reflection.
Figure 2: Flowchart of the SF algorithm based on the coordinate domain.
Figure 3: Flowchart of the SF algorithm based on the observation domain.
Figure 4: Flowchart of the MHM algorithm based on the observation domain.
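As a simple illustration of the coordinate-domain sidereal filtering (SF) summarized in the review, the sketch below shifts the previous day's coordinate residuals by the approximate GPS orbit-repeat lag and subtracts them from the current series. The nominal lag value and the helper names are assumptions for illustration; per-satellite lag estimation and edge handling are omitted.

import numpy as np

SIDEREAL_LAG_S = 86154   # approximate mean GPS orbit-repeat lag in seconds (often tuned per satellite)

def sidereal_filter(coords_today, coords_yesterday, rate_s=1.0, lag_s=SIDEREAL_LAG_S):
    """Coordinate-domain sidereal filtering sketch.

    coords_today, coords_yesterday : 1D arrays of one coordinate component (e.g., East),
                                     sampled every rate_s seconds, aligned to the same start epoch.
    The repeating geometry arrives about (86400 - lag_s) seconds earlier each day, so
    yesterday's series is advanced by that amount before subtraction.
    """
    shift = int(round((86400 - lag_s) / rate_s))
    model = np.roll(coords_yesterday, -shift)   # np.roll wraps at the ends; edge samples
                                                # would be discarded in practice
    return coords_today - model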
19 pages, 10172 KiB  
Article
Reconstructing Snow Cover under Clouds and Cloud Shadows by Combining Sentinel-2 and Landsat 8 Images in a Mountainous Region
by Yanli Zhang, Changqing Ye, Ruirui Yang and Kegong Li
Remote Sens. 2024, 16(1), 188; https://doi.org/10.3390/rs16010188 - 2 Jan 2024
Cited by 4 | Viewed by 2261
Abstract
Snow cover is a sensitive indicator of global climate change, and optical images are an important means for monitoring its spatiotemporal changes. Due to the high reflectivity, rapid change, and intense spatial heterogeneity of mountainous snow cover, Sentinel-2 (S2) and Landsat 8 (L8) [...] Read more.
Snow cover is a sensitive indicator of global climate change, and optical images are an important means for monitoring its spatiotemporal changes. Due to the high reflectivity, rapid change, and intense spatial heterogeneity of mountainous snow cover, Sentinel-2 (S2) and Landsat 8 (L8) satellite imagery with both high spatial resolution and spectral resolution have become major data sources. However, optical sensors are more susceptible to cloud cover, and the two satellite images have significant spectral differences, making it challenging to obtain snow cover beneath clouds and cloud shadows (CCSs). Based on our previously published approach for snow reconstruction on S2 images using the Google Earth Engine (GEE), this study introduces two main innovations to reconstruct snow cover: (1) combining S2 and L8 images and choosing different CCS detection methods, and (2) improving the cloud shadow detection algorithm by considering land cover types, thus further improving the mountainous-snow-monitoring ability. The Babao River Basin of the Qilian Mountains in China is chosen as the study area; 399 scenes of S2 and 35 scenes of L8 are selected to analyze the spatiotemporal variations of snow cover from September 2019 to August 2022 in GEE. The results indicate that the snow reconstruction accuracies of both images are relatively high, and the overall accuracies for S2 and L8 are 80.74% and 88.81%, respectively. According to the time-series analysis of three hydrological years, it is found that there is a marked difference in the spatial distribution of snow cover in different hydrological years within the basin, with fluctuations observed overall. Full article
Show Figures
Figure 1: Location of the study area.
Figure 2: Flowchart of this algorithm.
Figure 3: Comparative analysis of cloud-free snow cover detection using S2, L8, and GF-2 images: (a–c) original Sentinel-2 (13 January 2020), GF-2, and Landsat 8 (14 January 2020) images; (d–f) clear-sky snow cover from the same images.
Figure 4: Comparison of CCS detection results between the improved Fmask4.0 and the original algorithm on S2 and L8 images over snow-covered and snow-free surfaces.
Figure 5: Accuracy evaluation of S2 and L8 snow cover reconstruction under CCSs against GF-2 reference data, using the improved SNOWL algorithm.
Figure 6: Variation characteristics of the SCR over three hydrological years in the BRB.
Figure 7: Spatiotemporal variations of snow cover from October 2021 to May 2022.
Figure 8: Relationship curve of the SCR with temperature.
Figure 9: Time-series snow cover variation in the BRB by combining Sentinel-2 and Landsat 8 in the 2019–2020 hydrological year.
Figure 10: Inter-annual variation of snow cover area and its relationship with daily average air temperature in the three hydrological years.
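A minimal sketch of an elevation-based reconstruction of snow beneath clouds and cloud shadows, in the spirit of the SNOWL-type approach referred to in the figure captions, is given below. The snowline percentile and all names are illustrative assumptions rather than the authors' exact rules.

import numpy as np

def reconstruct_snow_under_clouds(snow_mask, cloud_mask, dem, snowline_pct=5):
    """Elevation-based sketch for reconstructing snow beneath clouds/cloud shadows (CCSs).

    snow_mask  : boolean array of clear-sky snow pixels (e.g., from an NDSI threshold)
    cloud_mask : boolean array of cloud and cloud-shadow pixels
    dem        : co-registered elevation array
    A regional snowline elevation is estimated as a low percentile of the elevations of
    clear-sky snow pixels (the percentile is an assumption), and CCS pixels lying above
    that elevation are relabelled as snow.
    """
    clear_snow_elev = dem[~cloud_mask & snow_mask]
    if clear_snow_elev.size == 0:
        return snow_mask                                   # no clear-sky snow to learn from
    snowline = np.percentile(clear_snow_elev, snowline_pct)
    out = snow_mask.copy()
    out[cloud_mask & (dem >= snowline)] = True             # fill snow above the snowline
    return out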
19 pages, 6182 KiB  
Article
Noninvasive Early Detection of Nutrient Deficiencies in Greenhouse-Grown Industrial Hemp Using Hyperspectral Imaging
by Alireza Sanaeifar, Ce Yang, An Min, Colin R. Jones, Thomas E. Michaels, Quinton J. Krueger, Robert Barnes and Toby J. Velte
Remote Sens. 2024, 16(1), 187; https://doi.org/10.3390/rs16010187 - 2 Jan 2024
Cited by 4 | Viewed by 3642
Abstract
Hyperspectral imaging is an emerging non-invasive technology with potential for early nutrient stress detection in plants prior to visible symptoms. This study evaluated hyperspectral imaging for early identification of nitrogen, phosphorus, and potassium (NPK) deficiencies across three greenhouse-grown industrial hemp plant cultivars ( [...] Read more.
Hyperspectral imaging is an emerging non-invasive technology with potential for early nutrient stress detection in plants prior to visible symptoms. This study evaluated hyperspectral imaging for early identification of nitrogen, phosphorus, and potassium (NPK) deficiencies across three greenhouse-grown industrial hemp plant cultivars (Cannabis sativa L.). Visible and near-infrared spectral data (380–1022 nm) were acquired from hemp samples subjected to controlled NPK stresses at multiple developmental timepoints using a benchtop hyperspectral camera. Robust principal component analysis was developed for effective screening of spectral outliers. Partial least squares discriminant analysis (PLS-DA) and support vector machines (SVM) were developed and optimized to classify nutrient deficiencies using key wavelengths selected by variable importance in projection (VIP) and interval partial least squares (iPLS). The 16-wavelength iPLS-C-SVM model achieved the highest precision of 0.75 to 1 on the test dataset. Key wavelengths for effective nutrient deficiency detection spanned the visible range, underscoring the hyperspectral imaging sensitivity to early changes in leaf pigment levels prior to any visible symptom development. The emergence of wavelengths related to chlorophyll, carotenoid, and anthocyanin absorption as optimal for classification, highlights the technology’s capacity to detect subtle impending biochemical perturbations linked to emerging deficiencies. Identifying stress at this pre-visual stage could provide hemp producers with timely corrective action to mitigate losses in crop quality and yields. Full article
(This article belongs to the Special Issue Proximal and Remote Sensing for Precision Crop Management II)
Show Figures
Figure 1: (a) Industrial hemp plants in the greenhouse; (b) setup of the spectral measurement equipment using a hyperspectral camera and halogen lighting; (c) NDVI analysis to distinguish hemp plants from the background; (d) average spectral signatures of control and nutrient-deficient Atlas Wilhelmina hemp plants (n = 108), with shaded areas indicating the standard deviation.
Figure 2: RPCA model analysis for outlier detection: (a) Hotelling T2; (b) PC1 and (c) PC2 scores versus Q residual scatter plots, with samples exceeding the 95% confidence level marked as outliers.
Figure 3: (a) Predicted Y values from the PLS-DA model to distinguish between the three hemp cultivars; (b) VIP scores indicating the importance of wavelengths for cultivar discrimination.
Figure 4: (a) Predicted Y values from the PLS-DA model for early detection of nutrient stress across control and deficient groups; (b) VIP scores indicating the most important wavelengths for discriminating nutrient deficiency groups.
Figure 5: ROC curve and threshold analysis for temporal classification of nutrient stress in hemp plants.
Figure 6: Visualization of the optimal SVM parameter configuration for the iPLS-C-SVM and iPLS-Nu-SVM models.
Figure 7: Predicted class probabilities for the different hemp cultivars across the four nutrient deficiency classes (control and deficient stages T1–T3).
Figure 8: (a) Location and (b) relative importance of the 16 optimal wavelengths identified for nutrient deficiency detection using iPLS analysis.
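The PLS-DA/VIP wavelength ranking and SVM classification described in the abstract can be approximated with scikit-learn as sketched below. The hyperparameters, the one-hot encoding of labels, and the simple top-VIP band selection (used here in place of the paper's iPLS procedure) are assumptions for illustration only.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def plsda_vip(X, y, n_components=10):
    """PLS-DA sketch: PLS regression on one-hot class labels, returning VIP scores per wavelength."""
    y = np.asarray(y, int)
    Y = np.eye(y.max() + 1)[y]                            # one-hot encoding of integer labels
    pls = PLSRegression(n_components=n_components).fit(X, Y)
    t = pls.transform(X)                                  # latent scores (n_samples, n_components)
    w, q = pls.x_weights_, pls.y_loadings_                # (n_wavelengths, h) and (n_classes, h)
    ssy = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0) # variance explained per component
    vip = np.sqrt(X.shape[1] * (w ** 2 @ ssy) / ssy.sum())
    return pls, vip

def svm_on_selected_bands(X, y, vip, n_bands=16):
    """C-SVM on the n_bands highest-VIP wavelengths (a stand-in for the paper's iPLS selection)."""
    keep = np.argsort(vip)[-n_bands:]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X[:, keep], y)
    return clf, keep

# X: (n_samples, n_wavelengths) mean plant spectra; y: integer labels for control vs.
# deficiency time points. Data assembly and cross-validation are omitted.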
17 pages, 11300 KiB  
Technical Note
Three-Dimensional Resistivity and Chargeability Tomography with Expanding Gradient and Pole–Dipole Arrays in a Polymetallic Mine, China
by Meng Wang, Junlu Wang, Pinrong Lin and Xiaohong Meng
Remote Sens. 2024, 16(1), 186; https://doi.org/10.3390/rs16010186 - 1 Jan 2024
Cited by 1 | Viewed by 2208
Abstract
Three-dimensional resistivity/chargeability tomography based on distributed data acquisition technology is likely to provide abundant information for mineral exploration. To realize true 3D tomography, establishing transmitter sources with different injection directions and collecting vector signals at receiver points is necessary. We implemented 3D resistivity/ [...] Read more.
Three-dimensional resistivity/chargeability tomography based on distributed data acquisition technology is likely to provide abundant information for mineral exploration. To realize true 3D tomography, establishing transmitter sources with different injection directions and collecting vector signals at receiver points is necessary. We implemented 3D resistivity/chargeability tomography to search for new ore bodies in the deep and peripheral areas of Huaniushan, China. A distributed data acquisition system was used to form a vector receiver array in the survey area. First, by using the expanding gradient array composed of 11 pairs of transmitter electrodes, we quickly obtained the 3D distributions of the resistivity and chargeability of the whole area. Based on the electrical structure and geological setting, an NE-striking potential area for mineral exploration was determined. Next, a pole–dipole array was employed to depict the locations and shapes of the potential ore bodies in detail. The results showed that the inversion data for the two arrays corresponded well with the known geological setting and that the ore veins controlled by boreholes were located in the low-resistivity and high-chargeability zone. These results provided data for future mineral evaluation. Further research showed that true 3D tomography has obvious advantages over quasi-3D tomography. The expanding gradient array, characterized by a good signal strength and field efficiency, was suitable for target determination in the early exploration stage. The pole–dipole array with high spatial resolution can be used for detailed investigations. Choosing a reasonable data acquisition scheme is helpful to improve the spatial resolution and economic efficiency. Full article
(This article belongs to the Topic Green Mining)
Show Figures
Figure 1: Schematic diagram of the different 3D observation configurations: (a) S-shaped electrode arrangement using a multi-branch layout of multicore cables; (b) double-offset pole–dipole array; (c) pole–L-shaped-dipole array using distributed acquisition systems.
Figure 2: Regional geological map of Huaniushan in the Beishan area, located at the junction of the Tarim plate, the North China plate, and the Central Asia orogenic belt.
Figure 3: Geological and array arrangement map; data collection in the study area used the expanding gradient array (left) and, in the blue-boxed area, the pole–dipole array (right).
Figure 4: The receivers of the DEM system.
Figure 5: Sensitivity distribution of the different arrays: (a) gradient array with AB = 1 km and AB = 3 km; (b) pole–dipole array with AM = 0.2 km and AM = 1.5 km.
Figure 6: Apparent resistivity and apparent chargeability data from the expanding gradient array with AB = 2 km and 4 km (TR: current injection direction; RE: receiving direction).
Figure 7: Apparent resistivity and apparent chargeability data from the pole–dipole array for the five transmitter electrodes A1–A5.
Figure 8: 3D inversion results from the expanding gradient array: (a) resistivity; (b) chargeability; (c) morphology of the low-resistivity (<225 Ω·m) and high-chargeability (>12%) bodies.
Figure 9: Inversion results of the expanding gradient array as horizontal slices at a depth of 80 m: (a) resistivity; (b) chargeability.
Figure 10: Slices of the 3D inversion results from the expanding gradient array along line AA′: (a) geological profile; (b) resistivity profile; (c) chargeability profile.
Figure 11: 3D inversion results from the pole–dipole array: (a) resistivity; (b) chargeability; (c) morphology of the low-resistivity and high-chargeability bodies.
Figure 12: Slices of the 3D inversion results with the pole–dipole array along line AA′: (a) geological profile; (b) resistivity profile; (c) chargeability profile.
Figure 13: Inversion results using different data combinations with the expanding gradient array (nearly N–S and nearly E–W transmitter/receiver directions, and their combination), with slices along line AA′.
Figure 14: Inversion results using different data combinations with the pole–dipole array (nearly S–N- and E–W-trending data), with slices along line AA′.
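For readers unfamiliar with the arrays involved, the standard half-space geometric factor and apparent resistivity calculation behind both gradient and pole–dipole configurations can be sketched as follows. Electrode positions and the example values are made up, and the paper's actual acquisition geometry is not reproduced.

import numpy as np

def geometric_factor(a, b, m, n):
    """General four-electrode geometric factor K for surface electrodes on a half-space.

    a, b : current electrode positions (pass b=None for a pole source "at infinity")
    m, n : potential electrode positions
    """
    def r(p, q):
        return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    term = 1.0 / r(a, m) - 1.0 / r(a, n)
    if b is not None:
        term -= 1.0 / r(b, m) - 1.0 / r(b, n)
    return 2.0 * np.pi / term

def apparent_resistivity(delta_v, current, a, b, m, n):
    """Apparent resistivity rho_a = K * dV / I for one transmitter-receiver pair."""
    return geometric_factor(a, b, m, n) * delta_v / current

# Illustrative pole-dipole measurement (metres, volts, amperes; values are made up):
# rho_a = apparent_resistivity(0.012, 2.0, a=(0, 0, 0), b=None, m=(200, 0, 0), n=(250, 0, 0))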
21 pages, 5571 KiB  
Article
Coastline Monitoring and Prediction Based on Long-Term Remote Sensing Data—A Case Study of the Eastern Coast of Laizhou Bay, China
by Ke Mu, Cheng Tang, Luigi Tosi, Yanfang Li, Xiangyang Zheng, Sandra Donnici, Jixiang Sun, Jun Liu and Xuelu Gao
Remote Sens. 2024, 16(1), 185; https://doi.org/10.3390/rs16010185 - 1 Jan 2024
Cited by 4 | Viewed by 2991
Abstract
Monitoring shoreline movements is essential for understanding the impact of anthropogenic activities and climate change on the coastal zone dynamics. The use of remote sensing allows for large-scale spatial and temporal studies to better comprehend current trends. This study used Landsat 5 (TM), [...] Read more.
Monitoring shoreline movements is essential for understanding the impact of anthropogenic activities and climate change on the coastal zone dynamics. The use of remote sensing allows for large-scale spatial and temporal studies to better comprehend current trends. This study used Landsat 5 (TM), Landsat 8 (OLI), and Sentinel-2 (MSI) remote sensing images, together with the Otsu algorithm, marching squares algorithm, and tidal correction algorithm, to extract and correct the coastline positions of the east coast of Laizhou Bay in China from 1984 to 2022. The results indicate that 89.63% of the extracted shoreline segments have an error less than 30 m compared to the manually drawn coastline. The total length of the coastline increased from 166.90 km to 364.20 km, throughout the observation period, with a length change intensity (LCI) of 3.11% due to the development of coastal protection and engineering structures for human activities. The anthropization led to a decrease in the natural coastline from 83.33% to 13.89% and a continuous increase in the diversity and human use of the coastline. In particular, the index of coastline diversity (ICTD) and the index of coastline utilization degree (ICUD) increased from 0.39 to 0.79, and from 153.30 to 390.37, respectively. Over 70% of the sandy beaches experienced erosional processes. The shoreline erosion calculated using the end point rate (EPR) and the linear regression rate (LRR) is 79.54% and 85.58%, respectively. The fractal dimension of the coastline shows an increasing trend and is positively correlated with human activities. Coastline changes are primarily attributed to interventions such as land reclamation, aquaculture development, and port construction resulting in the creation of 10,000.20 hectares of new coastal areas. Finally, the use of Kalman filtering for the first time made it possible to predict that approximately 84.58% of the sandy coastline will be eroded to varying degrees by 2032. The research results can provide valuable reference for the scientific planning and rational utilization of resources on the eastern coast of Laizhou Bay. Full article
(This article belongs to the Section Environmental Remote Sensing)
Show Figures
Figure 1: Regional framework of the study area: (a) Bohai Bay and Shandong Province; (b) Laizhou Bay and Yantai; (c) the analyzed coastline.
Figure 2: (a) Tidal correction schematic diagram: L1 and L2 are the instantaneous water boundaries extracted at two different times, Lhigh is the position of the average high-tide line, H1, H2, and Hhigh are the corresponding tide levels, θ is the beach slope angle, Δh is the tidal range, and Δl is the resulting horizontal correction distance; (b) schematic diagram of the coastline correction process (the bottom image is the 2022 Sentinel-2 image).
Figure 3: Results of the accuracy assessment of the extracted shorelines: (a) length of overlap in different buffer zone ranges; (b) proportion of overlap length.
Figure 4: Spatiotemporal changes in the nine shoreline types between 1984 and 2022.
Figure 5: Evolution of the percentages of the nine coastline types from 1984 to 2022 and its relationship with the ICTD.
Figure 6: EPR of the sandy shoreline from 1984 to 2022 in the (a) northern, (b) central, and (c) southern parts of the survey area, and (d) rate of change (m/a).
Figure 7: LRR of the sandy shoreline from 1984 to 2022 in the (a) northern, (b) central, and (c) southern parts of the survey area, and (d) rate of change (m/a).
Figure 8: Analysis of the linear fit of shoreline length to fractal dimension.
Figure 9: Coastal land–sea changes along the eastern coast of Laizhou Bay from 1984 to 2022 (background: a 2022 Sentinel-2 image).
Figure 10: Evolution of the percentage of the nine coastline types from 1984 to 2022 and its relationship with the ICUD.
Figure 11: Prediction of the sandy coastline evolution in 2032 for (a) Shihu Mouth, (b) Longkou Artificial Island, and (c) Sanshan Island.
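The tidal correction of Figure 2 (Δl = Δh / tan θ) and the EPR/LRR change rates used in the abstract can be computed with a few lines of code. The sketch below uses illustrative transect values and is not the authors' processing chain.

import numpy as np

def tidal_offset(h_instant, h_high, beach_slope_deg):
    """Horizontal distance to move an instantaneous waterline to the mean high-tide line,
    following the schematic of Figure 2: delta_l = delta_h / tan(theta)."""
    return (h_high - h_instant) / np.tan(np.deg2rad(beach_slope_deg))

def epr_lrr(years, positions):
    """Shoreline change rates along one transect.

    years     : observation dates (decimal years)
    positions : shoreline position along the transect (m, seaward positive)
    EPR uses only the first and last observations; LRR is the slope of an ordinary
    least-squares fit through all of them (both in m/a).
    """
    years, positions = np.asarray(years, float), np.asarray(positions, float)
    epr = (positions[-1] - positions[0]) / (years[-1] - years[0])
    lrr = np.polyfit(years, positions, 1)[0]
    return epr, lrr

# Example with made-up transect data (negative rates indicate erosion):
# epr, lrr = epr_lrr([1984, 1995, 2010, 2022], [120.0, 112.5, 101.0, 95.5])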
19 pages, 59754 KiB  
Article
Characterization of Active Riverbed Spatiotemporal Dynamics through the Definition of a Framework for Remote Sensing Procedures
by Marta Crivellaro, Alfonso Vitti, Guido Zolezzi and Walter Bertoldi
Remote Sens. 2024, 16(1), 184; https://doi.org/10.3390/rs16010184 - 1 Jan 2024
Cited by 8 | Viewed by 2462
Abstract
The increasing availability and quality of remote sensing data are changing the methods used in fluvial geomorphology applications, allowing the observation of hydro-morpho-biodynamics processes and their spatial and temporal variations at broader and more refined scales. With the advent of cloud-based computing, it [...] Read more.
The increasing availability and quality of remote sensing data are changing the methods used in fluvial geomorphology applications, allowing the observation of hydro-morpho-biodynamic processes and their spatial and temporal variations at broader and more refined scales. With the advent of cloud-based computing, it is nowadays possible to reduce data processing time and increase code sharing, facilitating the development of reproducible analyses at regional and global scales. The consolidation of Earth Observation mission data into a single repository such as Google Earth Engine (GEE) offers the opportunity to standardize various methods found in the literature, in particular those related to the identification of key geomorphological parameters. This work investigates different computational techniques and timeframes (e.g., seasonal, annual) for the automatic detection of the active river channel and its multi-temporal aggregation, proposing a rational integration of remote sensing tools into river monitoring and management. In particular, we propose a quantitative analysis of different approaches to obtain a synthetic representative image of river corridors, where each pixel is computed as a percentile of the bands (or a combination of bands) of all available images in a given time span. Synthetic images have the advantage of limiting the variability of individual images, thus providing more robust results in terms of the classification of the main components of the riverine ecosystem (sediments, water, and riparian vegetation). We apply the analysis to a set of rivers with analogous bioclimatic conditions and different levels of anthropic pressure, using a combination of Landsat and Sentinel-2 data. The results show that synthetic images derived from multispectral indexes (such as NDVI and MNDWI) are more accurate than synthetic images derived from single bands. In addition, different temporal reduction statistics affect the detection of the active channel, and we suggest using the 90th percentile instead of the median to improve the detection of vegetated areas. Individual representative images are then aggregated into multitemporal maps to define a systematic and easily replicable approach for extracting active river corridors and their inherent spatial and temporal dynamics. Finally, the proposed procedure has the potential to be easily implemented and automated as a tool to provide relevant data to river managers. Full article
(This article belongs to the Special Issue Remote Sensing and GIS in Freshwater Environments)
Show Figures
Figure 1: Temporal extent and resolution of the analysis.
Figure 2: Workflow to obtain synthetic representative images of NDVI and MNDWI composites and to assess their difference: Panel 1, temporal reducer over an ImageCollection; Panel 2, comparison of the traditional approach of reducing representative bands (2A) and the proposed approach of reducing single-image multispectral indexes (2B).
Figure 3: (A) Location of the selected rivers; (B) overview of the selected river reaches (basemaps: 2015 ASIG orthophoto for the Shkumbin and Vjosa rivers; 2017–2020 Friuli Venezia Giulia regional orthophoto for the Tagliamento).
Figure 4: Comparison between the bands and indexes approaches using representative synthetic index images: (a) synthetic MNDWI and NDVI indexes derived from reduced bands and reduced indexes, with frequency histograms and the fixed classification thresholds; (b) spatial water and vegetation masks derived from the two approaches and distributions of MNDWI and NDVI values within the extracted masks.
Figure 5: Active channel extracted with the bands approach and with the indexes approach, compared to the active channel digitized from the 2015 orthophoto, with a difference map showing the overestimation of the bands approach.
Figure 6: Seasonal and annual active channel extent per year with 50p and 90p NDVI synthetic image classification.
Figure 7: Landsat: median and 90p active channel frequency envelopes, their difference, and the derived multitemporal active channel with a frequency threshold t = 27%.
Figure 8: Annual and seasonal active channel envelopes derived from Landsat images.
Figure 9: 2018–2022 annual active channel envelopes derived from Landsat and Sentinel-2 images.
Figure 10: Suggested approach for riverine active channel extraction from Landsat and Sentinel-2 data within GEE.
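A minimal Google Earth Engine (Python API) sketch of the index-based percentile compositing described in the abstract is shown below. The collection, area of interest, date range, cloud filter, and classification thresholds are placeholder assumptions, not the study's settings.

import ee

ee.Initialize()   # assumes Earth Engine authentication has already been set up

aoi = ee.Geometry.Rectangle([12.9, 46.1, 13.1, 46.3])          # placeholder area of interest
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(aoi)
        .filterDate("2022-01-01", "2022-12-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 30)))   # illustrative cloud filter

def add_indexes(img):
    # Per-image spectral indexes, to be reduced in time instead of reducing the raw bands.
    ndvi = img.normalizedDifference(["B8", "B4"]).rename("NDVI")
    mndwi = img.normalizedDifference(["B3", "B11"]).rename("MNDWI")
    return img.addBands(ndvi).addBands(mndwi)

indexed = s2.map(add_indexes)

# Synthetic representative images: per-pixel temporal percentiles of the index bands
# (e.g., the 90th percentile of NDVI rather than the median, to better capture vegetation).
synthetic = indexed.select(["NDVI", "MNDWI"]).reduce(ee.Reducer.percentile([50, 90]))

# Fixed-threshold masks on the synthetic images (threshold values are placeholders).
vegetation = synthetic.select("NDVI_p90").gt(0.4)
water = synthetic.select("MNDWI_p50").gt(0.0)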
19 pages, 14918 KiB  
Article
Ocean Colour Atmospheric Correction for Optically Complex Waters under High Solar Zenith Angles: Facilitating Frequent Diurnal Monitoring and Management
by Yongquan Wang, Huizeng Liu, Zhengxin Zhang, Yanru Wang, Demei Zhao, Yu Zhang, Qingquan Li and Guofeng Wu
Remote Sens. 2024, 16(1), 183; https://doi.org/10.3390/rs16010183 - 31 Dec 2023
Cited by 4 | Viewed by 1780
Abstract
Accurate atmospheric correction (AC) is one fundamental and essential step for successful ocean colour remote-sensing applications. Currently, most ACs and the associated ocean colour remote-sensing applications are restricted to solar zenith angles (SZAs) lower than 70°. The ACs under high SZAs present degraded [...] Read more.
Accurate atmospheric correction (AC) is a fundamental and essential step for successful ocean colour remote-sensing applications. Currently, most ACs and the associated ocean colour remote-sensing applications are restricted to solar zenith angles (SZAs) lower than 70°. The ACs under high SZAs present degraded accuracy or even failure problems, rendering the satellite retrievals of water quality parameters more challenging. Additionally, the complexity of the bio-optical properties of the coastal waters and the presence of complex aerosols add to the difficulty of AC. To address this challenge, this study proposed an AC algorithm based on extreme gradient boosting (XGBoost) for optically complex waters under high SZAs. The algorithm presented in this research has been developed using pairs of Geostationary Ocean Colour Imager (GOCI) high-quality noontime remote-sensing reflectance (Rrs) and the Rayleigh-corrected reflectance (ρrc) derived from the Ocean Colour–Simultaneous Marine and Aerosol Retrieval Tool (OC-SMART) in the morning (08:55 LT) and at dusk (15:55 LT). The algorithm was further examined using the daily GOCI images acquired in the morning and at dusk, and the hourly total suspended sediment (TSS) concentration was also obtained based on the atmospherically corrected GOCI data. The results showed that: (i) the model produced an accurate fitting performance (R2 ≥ 0.90, RMSD ≤ 0.0034 sr−1); (ii) the model had a high validation accuracy with an independent dataset (R2 = 0.92–0.97, MAPD = 8.2–26.81% and quality assurance (QA) score = 0.9–1); and (iii) the model successfully retrieved more valid Rrs for GOCI images under high SZAs and enhanced the accuracy and coverage of TSS mapping. This algorithm has great potential to be applied to AC for optically complex waters under high SZAs, thus increasing the frequency of available observations in a day. Full article
(This article belongs to the Special Issue GIS and Remote Sensing in Ocean and Coastal Ecology)
Show Figures
Figure 1: GOCI hourly RGB image (a), ρrc (555 nm) (b), and the corresponding Rrs (555 nm) (c) provided by the KOSC at 15:55 LT, 13 January 2021.
Figure 2: Frequency distribution of the solar zenith angle in the matchup dataset.
Figure 3: Scatterplots of Rrs(λ) retrieved by the XGBAC model vs. the reference values in the training dataset at each GOCI band.
Figure 4: Scatterplots of Rrs from the XGBAC model and the reference Rrs values of the validation dataset at each GOCI band.
Figure 5: Scatterplots of Rrs (490 nm) retrieved from the XGBAC model vs. the values of the evaluation dataset for the different SZA ranges.
Figure 6: Variations in the absolute percentage error of Rrs (490 nm) retrieved from the XGBAC and OC-SMART algorithms with the SZA.
Figure 7: Frequency distribution of QA scores for the Rrs retrievals obtained using XGBAC and OC-SMART.
Figure 8: Rrs maps at 443 nm, 555 nm, and 680 nm derived by XGBAC from the GOCI data sensed on 13 January 2021 at 08:55 LT and 15:55 LT, with the corresponding SZA maps.
Figure 9: RPD between Rrs (555 nm) values in morning/afternoon hours and noontime observations at 12:55 LT using the XGBAC and OC-SMART algorithms.
Figure 10: Temporal CV of the pixel values examined using multiple noontime (10:55–13:55 LT) Rrs (555 nm) values within a day.
Figure 11: Hourly TSS maps in the YRE and HZB retrieved from the XGBAC Rrs product, compared with the KOSC standard TSS products, on 13 January 2021.
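The XGBoost-based mapping from Rayleigh-corrected reflectance and observation geometry to Rrs can be sketched per band as follows. The feature layout and hyperparameters are illustrative assumptions and do not reproduce the XGBAC model of the paper.

import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

# Assumed training table: rho_rc at the GOCI bands plus solar/viewing zenith and relative
# azimuth angles as predictors, and high-quality noontime Rrs at one band as the target.
# One regressor would be trained per output band. Data assembly is omitted.

def train_band_model(X, y):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
    model = XGBRegressor(
        n_estimators=500,        # illustrative hyperparameters, not the paper's settings
        max_depth=8,
        learning_rate=0.05,
        subsample=0.8,
        colsample_bytree=0.8,
        objective="reg:squarederror",
    )
    model.fit(X_tr, y_tr, eval_set=[(X_va, y_va)], verbose=False)
    return model

# rrs_555_model = train_band_model(X_555, y_555)
# rrs_555 = rrs_555_model.predict(features_for_dusk_scene)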
34 pages, 57413 KiB  
Article
BD-SKUNet: Selective-Kernel UNets for Building Damage Assessment in High-Resolution Satellite Images
by Seyed Ali Ahmadi, Ali Mohammadzadeh, Naoto Yokoya and Arsalan Ghorbanian
Remote Sens. 2024, 16(1), 182; https://doi.org/10.3390/rs16010182 - 31 Dec 2023
Cited by 9 | Viewed by 3127
Abstract
When natural disasters occur, timely and accurate building damage assessment maps are vital for disaster management responders to organize their resources efficiently. Pairs of pre- and post-disaster remote sensing imagery have been recognized as invaluable data sources that provide useful information for building [...] Read more.
When natural disasters occur, timely and accurate building damage assessment maps are vital for disaster management responders to organize their resources efficiently. Pairs of pre- and post-disaster remote sensing imagery have been recognized as invaluable data sources that provide useful information for building damage identification. Recently, deep learning-based semantic segmentation models have been widely and successfully applied to remote sensing imagery for building damage assessment tasks. In this study, a two-stage, dual-branch, UNet architecture, with shared weights between two branches, is proposed to address the inaccuracies in building footprint localization and per-building damage level classification. A newly introduced selective kernel module improves the performance of the model by enhancing the extracted features and applying adaptive receptive field variations. The xBD dataset is used to train, validate, and test the proposed model based on widely used evaluation metrics such as F1-score and Intersection over Union (IoU). Overall, the experiments and comparisons demonstrate the superior performance of the proposed model. In addition, the results are further confirmed by evaluating the geographical transferability of the proposed model on a completely unseen dataset from a new region (Bam city earthquake in 2003). Full article
(This article belongs to the Special Issue Artificial Intelligence for Natural Hazards (AI4NH))
Show Figures
Graphical abstract
Figure 1: Number of image pairs in each disaster type per group in the xBD dataset.
Figure 2: Geographical distribution and type of disasters in the xBD dataset, with the number of building damage classes in each disaster shown as pie charts (green, light yellow, orange, and red for no-damage, minor-damage, major-damage, and destroyed; black for unclassified).
Figure 3: Location of Bam city in Kerman province of Iran, the municipality blocks and building footprints overlaid on the pre-disaster satellite image, and the post-disaster image with sample damaged (red) and undamaged (green) buildings.
Figure 4: Overview of the pre-disaster image of the study area and three samples of pre- and post-disaster images with their corresponding ground truth data.
Figure 5: Ground truth map of the study area with six zoomed patches; buildings in red, green, and black denote damaged, not damaged, and unclassified classes.
Figure 6: Augmentation techniques applied to the input images for further regularizing the model.
Figure 7: Overall workflow of the study: (1) data preparation, (2) damage assessment (localization and classification models), and (3) transferability analysis.
Figure 8: Schematic diagram of the UNet used for the building localization stage, showing the encoder and decoder paths, skip connections, and pre-trained backbones; the output is a binary building footprint map.
Figure 9: Schematic diagram of the dual-branch UNet for damage assessment: the pre- and post-disaster images enter separate branches with shared weights, and the concatenated feature maps feed the segmentation head producing a per-building damage classification map.
Figure 10: Diagram of the Selective Kernel Module with its three stages of Split, Fuse, and Select for a sample two-branch selective kernel.
Figure 11: (a) Interpretation of the confusion matrix into evaluation metrics; (b) visual comparison of IoU and F1-score.
Figure 12: Image preparation steps: random 256 × 256 patch extraction, augmentation, arrangement of masks for the damage assessment stage, and class weights derived from the proportional number of samples per class.
Figure 13: Schematic diagram of the various architectures used for comparison.
Figure 14: Boxplots of the evaluation metrics used to compare the localization models.
Figure 15: Visual comparison of the different localization methods.
Figure 16: Comparison of the classification models in the damage assessment stage (training and validation curves for Models 1–4).
Figure 17: Visual comparison of the damage assessment results from the four models.
Figure 18: Building damage assessment transferability results for seven regions of the Bam earthquake dataset, with pre-/post-disaster images, class probability maps, and per-building classification against ground truth.
Figure 19: Imaging properties in the 19 disaster events of the xBD dataset, before and after the disasters.
Figure 20: Distribution of the influencing parameters (Sun elevation angle, off-nadir angle, ground resolution, and their pre-/post-disaster differences) and the thresholds specified for each experiment.
Figure 21: Relation between different parameters and the "localization" quality metrics.
Figure 22: Relation between different relative parameters and the "classification" quality metrics.
Figure 23: Relation between "disaster types" and the "classification" quality metrics.
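A compact PyTorch sketch of a selective-kernel (split-fuse-select) block, of the kind the abstract credits with adaptive receptive-field variation, is given below. The two-branch design, kernel/dilation choices, and reduction ratio are assumptions and do not reproduce the paper's module.

import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Minimal selective-kernel block: two branches with different receptive fields,
    fused by global pooling and re-weighted by a softmax attention over the branches."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(                        # dilated 3x3 ~ effective 5x5
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.select = nn.Linear(hidden, channels * 2)         # one channel-attention vector per branch

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)             # split
        s = (u3 + u5).mean(dim=(2, 3))                        # fuse: global average pooling
        z = self.fuse(s)
        attn = torch.softmax(self.select(z).view(x.size(0), 2, -1), dim=1)   # select
        a3 = attn[:, 0].unsqueeze(-1).unsqueeze(-1)
        a5 = attn[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5                              # adaptive receptive-field mixing

# x = torch.randn(2, 64, 128, 128); y = SelectiveKernel(64)(x)  # y keeps the input shape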
22 pages, 6263 KiB  
Article
Intelligent Environment-Adaptive GNSS/INS Integrated Positioning with Factor Graph Optimization
by Zhengdao Li, Pin-Hsun Lee, Tsz Hin Marcus Hung, Guohao Zhang and Li-Ta Hsu
Remote Sens. 2024, 16(1), 181; https://doi.org/10.3390/rs16010181 - 31 Dec 2023
Cited by 5 | Viewed by 3260
Abstract
Global navigation satellite systems (GNSSs) applied to intelligent transport systems in urban areas suffer from multipath and non-line-of-sight (NLOS) effects due to the signal reflections from high-rise buildings, which seriously degrade the accuracy and reliability of vehicles in real-time applications. Accordingly, the integration [...] Read more.
Global navigation satellite systems (GNSSs) applied to intelligent transport systems in urban areas suffer from multipath and non-line-of-sight (NLOS) effects due to the signal reflections from high-rise buildings, which seriously degrade the accuracy and reliability of vehicles in real-time applications. Accordingly, the integration between GNSS and inertial navigation systems (INSs) could be utilized to improve positioning performance. However, the fixed GNSS solution uncertainty of the conventional integration method cannot reflect the fluctuating GNSS reliability in fast-changing urban environments. This weakness can be addressed by using a deep learning model to sense the ambient environment intelligently, and it can be further mitigated using factor graph optimization (FGO), which is capable of generating robust solutions based on historical data. This paper develops an adaptive GNSS/INS loosely coupled system based on FGO, with the fixed-gain Kalman filter (KF) and adaptive KF (AKF) taken as comparisons. The adaptation is aided by a convolutional neural network (CNN), and the feasibility is verified using data from different grades of receivers. Compared with the integration using fixed-gain KF, the proposed adaptive FGO (AFGO) maintains 100% positioning availability and reduces the overall 2D positioning error by up to 70% in the aspects of both root mean square error (RMSE) and standard deviation (STD). Full article
(This article belongs to the Special Issue Remote Sensing in Urban Positioning and Navigation)
Show Figures

Figure 1

Figure 1
<p>Flowchart of the proposed AFGO approach.</p>
Full article ">Figure 2
<p>The schematics of a GNSS/INS factor graph. Circles and rectangles represent the states and factors, respectively. The example section at epoch <span class="html-italic">t</span> is displayed to illustrate formulations from (<a href="#FD17-remotesensing-16-00181" class="html-disp-formula">17</a>) to (<a href="#FD22-remotesensing-16-00181" class="html-disp-formula">22</a>).</p>
Figure 3: Overview of the proposed CNN network, where P_CNN denotes the output from the CNN model.
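Since this listing does not detail the CNN's input encoding or architecture, the following PyTorch stand-in only illustrates the idea of regressing a non-negative scalar error estimate P_CNN from a per-epoch feature image; the 1×32×32 input, layer sizes, and Softplus output are assumptions.

```python
# Hypothetical stand-in for the environment-sensing CNN: regresses a non-negative
# scalar P_CNN (predicted 2D positioning error, in metres) from a per-epoch feature image.
import torch
import torch.nn as nn

class ErrorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 1x32x32 -> 16x16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x8x8
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),                              # error must be >= 0
        )

    def forward(self, x):
        return self.head(self.features(x))

p_cnn = ErrorCNN()(torch.randn(1, 1, 32, 32))    # shape (1, 1): one predicted error per epoch
```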
Figure 4: Illustration of the adaptive uncertainty mechanism.
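A plausible reading of this mechanism, written out as code: the CNN-predicted error P_CNN replaces the fixed GNSS measurement covariance of the conventional integration, so GNSS is trusted less exactly when the environment degrades it. The clipping bounds and the isotropic form below are assumptions for illustration, not the paper's exact rule.

```python
# Hypothetical adaptive-uncertainty rule: map the predicted 2D error P_CNN to a
# per-epoch GNSS position covariance for the measurement factor (FGO) or R matrix (AKF).
import numpy as np

def adaptive_gnss_cov(p_cnn_m, floor_m=1.0, ceil_m=50.0):
    sigma = float(np.clip(p_cnn_m, floor_m, ceil_m))   # guard against over/under-confidence
    return np.diag([sigma**2, sigma**2])

FIXED_GNSS_COV = np.diag([5.0**2, 5.0**2])   # fixed-gain KF style: one covariance for all epochs

print(adaptive_gnss_cov(1.2))    # open-sky epoch: GNSS trusted strongly
print(adaptive_gnss_cov(35.0))   # deep-urban epoch: GNSS heavily down-weighted
```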
Figure 5: The anticlockwise ground-truth trajectory (green) in Kowloon Bay, Hong Kong. It is separated by breakpoints 1–5 (marked in red) and junctions A, B, and C (marked in yellow). The light, medium, and harsh urban areas are shaded in blue, orange, and purple, respectively.
Figure 6: CNN-predicted and true 2D positioning errors for the three GNSS receivers. The subplot for the Xiaomi receiver is zoomed in for better illustration; its highest 2D error is around 170 m.
Figure 7: Trajectories of different positioning solutions using U-Blox data, with the large drift at junction C pointed out by the magenta arrow.
Figure 8: 2D positioning errors of different positioning solutions using U-Blox data throughout 920 epochs, with the periods at junctions A, B, and C shaded in gray. The upper subplot depicts the full range of 2D errors, while the lower one shows a zoomed-in view from 0 to 100 m.
Figure 9: Trajectories of different positioning solutions using Xiaomi data, with the large drift at junction C pointed out by the magenta arrow.
Figure 10: 2D positioning errors of different positioning solutions using Xiaomi data throughout 920 epochs, with the periods at junctions A, B, and C shaded in gray. The upper subplot depicts the full range of 2D errors, while the lower one shows a zoomed-in view from 0 to 100 m.
Figure 11: Trajectories of different positioning solutions using NovAtel data, with the large drifts at junctions B and C pointed out by the magenta arrows.
Figure 12: 2D positioning errors of different positioning solutions using NovAtel data throughout 920 epochs, with the periods at junctions A, B, and C shaded in gray. The upper subplot depicts the full range of 2D errors, while the lower one shows a zoomed-in view from 0 to 100 m.
Figure 13: (a) GNSS availability of the three receivers; improvement in solution performance relative to the KF in terms of (b) 2D RMSE and (c) 2D STD. The only anomaly is that, for the AKF method, Xiaomi shows a greater improvement than NovAtel; this is expected because larger improvements accompany lower GNSS availability, an empirical observation rather than a strict rule.
Figure 14: Summary of integration methods in terms of the 2D RMSE and STD for the three receivers during all 920 epochs. The RMSE and STD data come from Tables 5–7.
Figure 15: Summary of integration methods in terms of the 2D RMSE and STD for the three receivers during all 920 epochs.
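For reference, the two summary statistics plotted here can be reproduced from epoch-wise 2D errors as below; the sample numbers are invented and only show how an "up to 70%" style improvement over the KF baseline would be computed.

```python
# 2D RMSE and STD over all epochs, plus relative improvement versus a KF baseline.
import numpy as np

def summarize_2d_errors(err_m):
    err = np.asarray(err_m, dtype=float)
    return float(np.sqrt(np.mean(err**2))), float(np.std(err))

def improvement_pct(baseline, candidate):
    return 100.0 * (baseline - candidate) / baseline

kf_err   = [8.0, 12.0, 60.0, 9.0, 7.5]    # illustrative epoch-wise 2D errors (m), not the paper's data
afgo_err = [3.0,  4.0, 12.0, 3.5, 3.0]

kf_rmse, kf_std     = summarize_2d_errors(kf_err)
afgo_rmse, afgo_std = summarize_2d_errors(afgo_err)
print(f"RMSE improvement: {improvement_pct(kf_rmse, afgo_rmse):.1f}%")
print(f"STD improvement:  {improvement_pct(kf_std, afgo_std):.1f}%")
```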
Figure 16: GNSS solution uncertainties of the fixed integration (red) and the adaptive integration (blue), together with the true 2D positioning error indicating the true uncertainty (green), over the whole time frame for the three receivers. For each receiver, the uncertainties during GNSS-unavailable periods are omitted on all three curves, and only INS propagation is processed. Epochs without CNN predictions are additionally omitted on the true-uncertainty curve.
Figure 17: Zoomed-in subfigures from Figure 16 during the period 162–167 s for (a) the U-Blox, (b) the Xiaomi, and (c) the NovAtel receiver.