Search Results (834)

Search Parameters:
Keywords = radar backscatter

18 pages, 46447 KiB  
Article
Improved Coherent Processing of Synthetic Aperture Radar Data through Speckle Whitening of Single-Look Complex Images
by Luciano Alparone, Alberto Arienzo and Fabrizio Lombardini
Remote Sens. 2024, 16(16), 2955; https://doi.org/10.3390/rs16162955 - 12 Aug 2024
Abstract
In this study, we investigate the usefulness of the spectral whitening procedure, devised by one of the authors as a preprocessing stage of envelope-detected single-look synthetic aperture radar (SAR) images, in application contexts where phase information is relevant. In the first experiment, each of the raw datasets of an interferometric pair of COSMO-SkyMed images, representing industrial buildings amidst vegetated areas, was individually (1) synthesized by the SAR processor without Fourier-domain Hamming windowing; (2) synthesized with Hamming windowing, used to improve the focalization of targets, with the drawback of spatially correlating speckle; and (3) processed for the whitening of complex speckle, using the data obtained in (2). The interferograms were produced in the three cases, and interferometric coherence and phase maps were calculated through 3 × 3 boxcar filtering. In (1), coherence is low on vegetation; the presence of high sidelobes in the system’s point-spread function (PSF) causes the spread of areas featuring high backscattering. In (2), point targets and buildings are better defined, thanks to the sidelobe suppression achieved by the frequency windowing, but the background coherence is abnormally increased because of the spatial correlation introduced by the Hamming window. Case (3) is the most favorable because the whitening operation results in low coherence in vegetation and high coherence in buildings, where the effects of windowing are preserved. An analysis of the phase map reveals that (3) is also likely to be easier to unwrap. Results are presented on a TerraSAR-X/TanDEM-X (TSX-TDX) image pair by processing the interferograms of original and whitened data using a non-local filter. The main results are as follows: (1) with autocorrelated speckle, the estimation error of coherence may reach 16% and depends inversely on the heterogeneity of the scene; and (2) the cleanness and accuracy of the phase are increased by the preliminary whitening stage, as witnessed by the number of residues, which is reduced by 24%. Benefits are expected not only for differential InSAR (DInSAR) but also for any coherent analysis and processing carried out on SLC data.
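The coherence maps described in this abstract come from 3 × 3 boxcar filtering of the interferometric pair. As a rough illustration only (not the authors' code), a minimal boxcar coherence estimator over two hypothetical co-registered SLC arrays, `slc1` and `slc2`, might look like this:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_coherence(slc1, slc2, win=3):
    """Estimate interferometric coherence magnitude with a win x win boxcar window."""
    ifg = slc1 * np.conj(slc2)                       # complex interferogram
    num = uniform_filter(ifg.real, win) + 1j * uniform_filter(ifg.imag, win)
    p1 = uniform_filter(np.abs(slc1) ** 2, win)      # local power of image 1
    p2 = uniform_filter(np.abs(slc2) ** 2, win)      # local power of image 2
    return np.abs(num) / np.sqrt(p1 * p2 + 1e-12)

# toy data standing in for co-registered single-look complex images
rng = np.random.default_rng(0)
slc1 = rng.normal(size=(100, 100)) + 1j * rng.normal(size=(100, 100))
slc2 = slc1 + 0.5 * (rng.normal(size=(100, 100)) + 1j * rng.normal(size=(100, 100)))
print(boxcar_coherence(slc1, slc2).mean())
```

Spatially correlated speckle biases this kind of estimator upward, which is why whitening the SLC data before coherence estimation reduces the overestimation reported above.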
Figure 1: Flowchart of SAR system (onboard sensor and on-ground processor) followed by optional whitening stage.
Figure 2: Power spectra: (a) periodogram of SLC data correlated in slant-range direction; (b) frequency response of inverse filter; (c) periodogram of SLC data in (a) after whitening with the filter in (b).
Figure 3: Effects of SAR processor: (a) spatially correlated speckle originating from frequency windowing; (b) correlated speckle whitened using the inverse filter in Figure 2b; (c) example of a point target focused without a tapering window (negative grayscale).
Figure 4: Interferometric pair of images of Pomigliano, master image: (a) processed without a Hamming window; (b) processed with a Hamming window; (c) processed with a Hamming window and subsequently whitened.
Figure 5: Coherence maps of Pomigliano estimated using 3 × 3 boxcar filtering: (a) SLC pair processed without a Hamming window; (b) SLC pair processed with a Hamming window; (c) SLC pair processed with a Hamming window and subsequently whitened.
Figure 6: Maps of the interferometric phase of Pomigliano estimated using a 3 × 3 sliding window: (a) SLC pair processed without a Hamming window; (b) SLC pair processed with a Hamming window; (c) SLC pair processed with a Hamming window and subsequently whitened.
Figure 7: Unfiltered modulus of interferogram of TSX-TDX SLC pair of the Euskirchen test site.
Figure 8: Modulus of interferogram of Euskirchen filtered using NL-InSAR: (a) from non-whitened SLC pair; (b) from whitened SLC pair.
Figure 9: Coherence maps of Euskirchen estimated using NL-InSAR: (a) from non-whitened SLC pair; (b) from whitened SLC pair.
Figure 10: Difference in coherence calculated from whitened and original data of Euskirchen: the overestimation due to correlation reaches 16% in homogeneous areas.
Figure 11: Maps of interferometric phases of Euskirchen estimated using NL-InSAR: (a) from non-whitened SLC pair; (b) from whitened SLC pair.
Figure 12: Phase residues overlaid on the coherence map of Euskirchen estimated using NL-InSAR: (a) from non-whitened SLC pair; (b) from whitened SLC pair.
21 pages, 3577 KiB  
Article
Exploring Distributed Scatterers Interferometric Synthetic Aperture Radar Attributes for Synthetic Aperture Radar Image Classification
by Mingxuan Wei, Yuzhou Liu, Chuanhua Zhu and Chisheng Wang
Remote Sens. 2024, 16(15), 2802; https://doi.org/10.3390/rs16152802 - 31 Jul 2024
Viewed by 263
Abstract
Land cover classification of Synthetic Aperture Radar (SAR) imagery is a significant research direction in SAR image interpretation. However, due to the unique imaging methodology of SAR, interpreting SAR images presents numerous challenges, and land cover classification using SAR imagery often lacks innovative features. Distributed scatterers interferometric synthetic aperture radar (DS-InSAR), a common technique for deformation extraction, generates several intermediate parameters during its processing that have a close relationship with land features. Therefore, this paper utilizes the coherence matrix, the number of statistically homogeneous pixels (SHPs), and the ensemble coherence produced during DS-InSAR processing as classification features, combined with the backscatter intensity of multi-temporal SAR imagery, to explore the impact of these features on the discernibility of land objects in SAR images. The results indicate that the adopted features improve the accuracy of land cover classification. SHPs and ensemble coherence demonstrate significant importance in distinguishing land features, showing that these proposed features can serve as new attributes for land cover classification in SAR imagery.
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
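To make the feature setup concrete, the sketch below (an assumed workflow with synthetic data, not the authors' code) stacks hypothetical per-pixel DS-InSAR attributes with multi-temporal backscatter intensity and trains a random forest, reporting the importance of the SHP count and ensemble coherence:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
mli = rng.normal(size=(n, 20))                          # time-series backscatter intensity (Mli)
bro = rng.integers(1, 100, size=(n, 1)).astype(float)   # SHP count per pixel (Bro)
pcoh = rng.uniform(0, 1, size=(n, 1))                   # ensemble coherence (Pcoh)
cohm = rng.uniform(0, 1, size=(n, 10))                  # e.g., reduced coherence-matrix features (CohM)
labels = rng.integers(0, 5, size=n)                     # five land feature classes

X = np.hstack([mli, bro, pcoh, cohm])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", rf.score(X_te, y_te))
print("importance of SHP count and ensemble coherence:", rf.feature_importances_[20:22])
```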
Figure 1: Research area and schematic diagram of classified land features. (a) Optical image. The red frame indicates the research area. (b) Averaged SAR intensity image. (c–g) Optical images of classified land features within the research area.
Figure 2: The PA and UA for different land feature categories under the five feature combinations.
Figure 3: The confusion matrices for the five feature combinations. (a) Time-series backscatter intensity feature combination (Mli). (b) Backscatter and coherence matrix combination (Mli + CohM). (c) Backscatter and statistically homogeneous pixel number combination (Mli + Bro). (d) Backscatter and ensemble coherence combination (Mli + Pcoh). (e) Combination of all features: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 4: The Pearson correlation matrix for the selected features. Mli represents the time-series backscatter, Bro represents the number of statistically homogeneous pixels, Pcoh represents the ensemble coherence, and Coh represents the PCA-processed coherence.
Figure 5: The mapping results of the five feature combinations. (a) Time-series backscatter intensity feature mapping (Mli). (b) Backscatter and coherence matrix combination mapping (Mli + CohM). (c) Backscatter and statistically homogeneous pixel number combination mapping (Mli + Bro). (d) Backscatter and ensemble coherence combination mapping (Mli + Pcoh). (e) Mapping with all features combined: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 6: The feature importance of the five combinations. (a) Time-series backscatter intensity feature importance (Mli); red represents the top three most important intensity features. (b) Backscatter and coherence matrix combination feature importance (Mli + CohM); green represents the highest-scoring coherence features. (c) Backscatter and statistically homogeneous pixel number combination feature importance (Mli + Bro); brown represents the scores for the number of statistically homogeneous pixels. (d) Backscatter and ensemble coherence combination feature importance (Mli + Pcoh); purple represents the scores for ensemble coherence. (e) Feature importance for the combination of all features: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 7: The time-series scattering intensity of the five types of land features.
Figure 8: PA and UA of five kinds of ground objects under homogeneous-pixel windows of different sizes.
27 pages, 8943 KiB  
Article
How Phenology Shapes Crop-Specific Sentinel-1 PolSAR Features and InSAR Coherence across Multiple Years and Orbits
by Johannes Löw, Steven Hill, Insa Otte, Michael Thiel, Tobias Ullmann and Christopher Conrad
Remote Sens. 2024, 16(15), 2791; https://doi.org/10.3390/rs16152791 - 30 Jul 2024
Viewed by 403
Abstract
Spatial information about plant health and productivity is essential when assessing progress towards Sustainable Development Goals such as life on land and zero hunger. Plant health and productivity are strongly linked to a plant’s phenological progress. Remote sensing, and since the launch of Sentinel-1 (S1), specifically radar-based frameworks, have been studied for the purpose of monitoring phenological development. This study produces insights into how crop phenology shapes S1 signatures of PolSAR features and InSAR coherence of wheat, canola, sugar beet, and potato across multiple years and orbits. To this end, differently smoothed time series and a baseline of growing degree days are stacked to estimate the patterns of occurrence of extreme values and break points. These patterns are then linked to in situ observations of phenological development. The comparison of patterns across multiple orbits and years reveals that a single optimized fit hampers the tracking capacity of an entire-season monitoring framework, as does sole reliance on extreme values. VV and VH backscatter intensities outperform all other features, but certain combinations of phenological stage and crop type are better covered by a complementary set of PolSAR features and coherence. With regard to PolSAR features, alpha and entropy can be replaced by the cross-polarization ratio for tracking certain stages. Moreover, a range of moderate incidence angles is better suited for monitoring crop phenology, and wheat and canola are favored by a late afternoon overpass. In sum, this study provides insights into phenological development at the landscape level that can be of further use when investigating spatial and temporal variations within the landscape.
(This article belongs to the Special Issue Cropland Phenology Monitoring Based on Cloud-Computing Platforms)
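As a simplified illustration of the time-series analysis described above (assumed stand-ins: a Savitzky–Golay filter instead of the study's LOESS smoothing, and synthetic VV values), extrema of a smoothed backscatter series can be located like this:

```python
import numpy as np
from scipy.signal import savgol_filter, argrelextrema

rng = np.random.default_rng(1)
doy = np.arange(1, 366, 6)                                            # ~6-day Sentinel-1 revisit
vv_db = -12 + 3 * np.sin(doy / 58.0) + rng.normal(0, 0.4, doy.size)   # synthetic VV series (dB)

smooth = savgol_filter(vv_db, window_length=9, polyorder=2)   # stand-in for LOESS smoothing
maxima = argrelextrema(smooth, np.greater, order=3)[0]        # candidate phenological peaks
minima = argrelextrema(smooth, np.less, order=3)[0]           # candidate troughs
print("DOY of maxima:", doy[maxima])
print("DOY of minima:", doy[minima])
```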
Figure 1: Map of InVeKoS data 2020 for DEMMIN and the selected crops: winter wheat, sugar beet, canola, and potato. Top right corner: extent of the AOI in Mecklenburg Western Pomerania. Center right: extent in relation to the footprint of the relative orbits.
Figure 2: Essential steps of the analysis per orbit and year, separated by field and landscape level.
Figure 3: Schematic depiction of estimating the temporal density (TSM occurrence plot) of TSM occurrence at the field scale. The analysis encompasses five years (2017–2021), three relative orbits (146, 168, 95), and seven S1 features. The smoothing span ranges from 0.05 to 0.5 in steps of 0.05, resulting in eleven (n = 11) time series per field.
Figure 4: Exemplary yearly crop signatures for each crop type of VV backscatter with locations of their extrema. Signatures were smoothed by LOESS with span 0.2.
Figure 5: Schematic depiction of the analyses at the landscape level, containing the pattern extraction and the derivation of trackable stages. This was applied to time series originating from different years and/or orbits of the same crop type and S1 feature to enable the comparison of their respective TSM distributions. These comparisons allow for the derivation of common phenological patterns across years and orbits for each crop type and S1 feature.
Figure 6: Orbit-specific patterns of major signal changes at the landscape level tracked by break points according to day of year (DOY; x-axis) and artificial growing degree day (GDDsim) values (y-axis), in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at the landscape level from 2018 and 2020. Temporal uncertainties around BBCH stadia are marked by grey areas. Exemplary illustration for fields of winter wheat.
Figure 7: Year-wise count of S1 features producing break points (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level, overlaid by the GDD values of BBCH in situ observations (colored areas).
Figure 8: Orbit-, stage-, and crop-specific offsets of break points at the landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.
Figure 9: Orbit-, stage-, and crop-specific offsets of maxima at the landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.
Figure 10: Orbit-, stage-, and crop-specific offsets of minima at the landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.
Figure A1: Orbit-specific patterns of major signal changes at the landscape level tracked by break points according to day of year (DOY; x-axis) and artificial growing degree day (GDDsim) values (y-axis), in relation to the corresponding five-year mean GDD value of BBCH stadia observed by DWD at the landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.
Figure A2: Orbit-specific patterns of major signal changes by maxima by day of year (DOY; x-axis) and growing degree day (GDDsim) values (y-axis), in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at the landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.
Figure A3: Year-wise count of S1 features producing maxima (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level, overlaid by the GDD values of BBCH in situ observations (colored areas).
Figure A4: Orbit-specific patterns of major signal changes by minima by day of year (DOY; x-axis) and growing degree day (GDDsim) values (y-axis), in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at the landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.
Figure A5: Year-wise count of S1 features producing minima (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level, overlaid by the GDD values of BBCH in situ observations (colored areas).
18 pages, 11836 KiB  
Article
Flood Mapping of Synthetic Aperture Radar (SAR) Imagery Based on Semi-Automatic Thresholding and Change Detection
by Fengkai Lang, Yanyin Zhu, Jinqi Zhao, Xinru Hu, Hongtao Shi, Nanshan Zheng and Jianfeng Zha
Remote Sens. 2024, 16(15), 2763; https://doi.org/10.3390/rs16152763 - 29 Jul 2024
Viewed by 375
Abstract
Synthetic aperture radar (SAR) technology has become an important means of flood monitoring because of its large coverage, repeated observation, and all-weather, all-time working capabilities. The thresholding and change detection methods commonly used in emergency monitoring can detect floods quickly and simply. However, these methods still have some problems: (1) thresholding methods are easily affected by low-backscattering regions and speckle noise; (2) changes derived from multi-temporal information include urban renewal and seasonal variation, reducing the precision of flood monitoring. To solve these problems, this paper presents a new flood mapping framework that combines semi-automatic thresholding and change detection. First, multiple lines across land and water are drawn manually, and their local optimal thresholds are calculated automatically along these lines from the two ends towards the middle. Using the average of these thresholds, the low-backscattering regions are extracted to generate a preliminary inundation map. Then, a neighborhood-based change detection method combined with entropy thresholding is adopted to detect the changed areas. Finally, pixels in both the low-backscattering regions and the changed regions are marked as inundated terrain. Two flood datasets, one from Sentinel-1 in the Wharfe and Ouse River basin and another from GF-3 in Chaohu, are chosen to verify the effectiveness and practicality of the proposed method.
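As a rough sketch of the combined rule described above (assumed logic with placeholder thresholds, not the authors' implementation), the final flood mask keeps only pixels that are both dark in the flood image and markedly darker than in the reference image:

```python
import numpy as np

def flood_map(ref_db, flood_db, water_thr_db=-15.0, change_thr_db=3.0):
    """Boolean flood mask: low backscatter in the flood image AND a clear drop vs. reference."""
    low_backscatter = flood_db < water_thr_db        # open-water-like pixels (dB)
    decrease = (ref_db - flood_db) > change_thr_db   # significant backscatter decrease
    return low_backscatter & decrease

# toy call; the thresholds here are placeholders, not the paper's calibrated values
ref = np.full((3, 3), -8.0)
flood = np.array([[-17.0, -9.0, -18.0]] * 3)
print(flood_map(ref, flood))
```

In the paper, the water threshold comes from the semi-automatic line-profile procedure and the change threshold from entropy thresholding; the constants above are illustrative only.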
Figure 1: Workflow of the proposed flood mapping framework.
Figure 2: Boundary line pixel search.
Figure 3: Eight-neighborhood gradient estimation templates with p as the central pixel and p′ as the adjacent pixel. The pixel estimation of p and p′ is computed in the Manhattan distance d = 1. (a) Left-right template; (b) up-down template; (c) top-left and bottom-right template; (d) top-right and bottom-left template.
Figure 4: York data set: (a) reference SAR image; (b) flood SAR image; (c) ground truth map.
Figure 5: Chaohu data set: (a) flood SAR image; (b) flood optical image; (c) ROI-1; (d) ROI-2.
Figure 6: The position of T in the intensity histogram of the flood SAR image in York.
Figure 7: Experimental results of the proposed framework: (a) preliminary inundation map; (b) difference image; (c) change map; (d) flood map.
Figure 8: Positions and profiles of the four manually drawn lines in ROI-1. (a) Positions and numbers of the manually drawn lines; (b) profile of ①; (c) profile of ②; (d) profile of ③; (e) profile of ④.
Figure 9: Experimental results of ROI-1. (a) Otsu; (b) K&I; (c) semi-automatic thresholding; (d) ground truth map.
Figure 10: Experimental results of ROI-2. (a) Otsu; (b) K&I; (c) semi-automatic thresholding; (d) ground truth map.
Figure 11: Flood maps obtained by different methods: (a) K-means; (b) decision tree (DT); (c) rule-based object-oriented classification (RBOO); (d) neural network (NN); (e) support vector machine (SVM).
Figure 12: Comparison of flood map and ground truth map in the York area.
15 pages, 5966 KiB  
Article
Research on a Near-Field Millimeter Wave Imaging Algorithm and System Based on Multiple-Input Multiple-Output Sparse Sampling
by He Zhang, Hua Zong and Jinghui Qiu
Photonics 2024, 11(8), 698; https://doi.org/10.3390/photonics11080698 - 27 Jul 2024
Viewed by 307
Abstract
In order to reduce the hardware cost and data acquisition time in near-field scenarios, such as airport security imaging systems, this paper discusses the layout of a multiple-input multiple-output (MIMO) radar array. For existing MIMO imaging algorithms, the reconstructed-image artifacts and aliasing problems caused by sparse sampling are discussed. In this paper, a multi-station radar array and a corresponding sparse MIMO imaging algorithm based on combined sparse sub-channels are proposed. By studying the wavenumber spectrum of backscattered MIMO synthetic aperture radar (SAR) data, the nonlinear relationship between the wavenumber spectrum and the reconstructed image is established. By selecting a complex gain vector, multiple channels are combined coherently and effectively, thus eliminating aliasing and artifacts in the reconstructed image. At the same time, the algorithm can be used for MIMO-SAR configurations with arbitrarily distributed transmitting and receiving arrays. A new multi-station millimeter wave imaging system is designed using a frequency-modulated continuous wave (FMCW) chip and a sliding-rail platform as a planar SAR. This hardware combination makes the millimeter wave imaging system reconfigurable, convenient, and economical across multiple scenes.
(This article belongs to the Section Optoelectronics and Optical Materials)
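A heavily simplified sketch of the coherent multi-channel combination idea (an assumption for illustration, not the paper's derivation): sub-channel reconstructions are weighted by a complex gain vector and summed, so that a suitable choice of gains suppresses aliased ghosts while reinforcing the true target.

```python
import numpy as np

def combine_channels(subimages, gains):
    """subimages: (K, Ny, Nx) complex sub-channel reconstructions; gains: (K,) complex weights."""
    return np.tensordot(gains, subimages, axes=(0, 0))

rng = np.random.default_rng(0)
K, Ny, Nx = 4, 64, 64
subimages = rng.normal(size=(K, Ny, Nx)) + 1j * rng.normal(size=(K, Ny, Nx))
gains = np.exp(1j * rng.uniform(0, 2 * np.pi, K)) / K   # hypothetical complex gain vector
combined = np.abs(combine_channels(subimages, gains))    # magnitude image after combination
print(combined.shape)
```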
Figure 1: MIMO millimeter wave near-field imaging schematic. The vertical distance between the target plane and the scanning plane is z0.
Figure 2: X-axis spectrum data distribution.
Figure 3: Multi-channel coherent overlay image reconstruction process.
Figure 4: IWR1843 transceiver-integrated chip and data acquisition circuit board.
Figure 5: FMCW chirp signal.
Figure 6: MIMO radar imaging system.
Figure 7: MIMO imaging platform.
Figure 8: Experimental millimeter wave radar scanning diagram.
Figure 9: HIT metal plate physical picture.
Figure 10: (a) Imaging results under a single channel; (b) imaging results without multi-channel coherent combination; (c) compressed sensing imaging (TSA); (d) imaging with a complex gain vector introduced for coherent combination of multiple channels.
Figure 11: Size of the metal target.
Figure 12: (a) Imaging results under a single channel; (b) coherently stacked channels 1, 5, and 9; (c) coherently stacked channels 1, 3, 5, 7, and 9.
11 pages, 7143 KiB  
Article
A Broadband Meta-Absorber for Curved Terahertz Stealth Applications
by Saima Hafeez, Jianguo Yu, Fahim Aziz Umrani, Abdul Majeed and Wang Yun
Electronics 2024, 13(15), 2966; https://doi.org/10.3390/electronics13152966 - 27 Jul 2024
Viewed by 357
Abstract
Metasurface absorbers have shown significant potential in stealth applications due to their adaptability and capacity to reduce the backscattering of electromagnetic (EM) waves. Nevertheless, owing to the materials used in the terahertz (THz) range, simultaneously achieving excellent ultrawideband stealth performance remains an important and difficult challenge. In this study, an ultrawideband absorber is proposed based on indium tin oxide (ITO) and polyethylene terephthalate (PET), with a structure thickness of only 0.16λ. ITO sheets are utilized to achieve a broad spectral response, optical transparency, and flexibility of the metasurface. The results show that absorption higher than 90% can be achieved in the frequency band from 1.75 to 5 THz under normal TE and TM polarizations, which covers a wide THz band. The structure is insensitive to polarization angle and exhibits 97% relative bandwidth above 90% efficiency up to an oblique incident angle of 60°. To further validate the absorption performance, a radar cross-section (RCS) reduction investigation was performed on both planar and conformal configurations. The findings show that under normally incident EM waves, both flat and curved surfaces achieve an RCS reduction of over 10 dB, covering an extremely wide frequency range of 1.75 to 5 THz. The metasurface presented in this study exhibits significant potential for use in several THz applications, including flexible electronic devices and stealth aircraft windows.
(This article belongs to the Section Microwave and Wireless Communications)
Figure 1: (a) Schematics of the proposed absorber and its meta-atom configuration. (b–d) Design evaluation of a meta-atom.
Figure 2: (a) Absorption and reflection coefficient characteristics under TE and TM mode. (b) Results of reflection coefficient using CST and HFSS software. (c) Step-by-step design evaluation of absorption.
Figure 3: (a) Normalized impedance. Surface current distribution on (b) top and (c) bottom layer.
Figure 4: (a) Schematic diagram of ECM in ADS; (b) the S-parameters from ADS and CST simulations.
Figure 5: Effect on absorption spectrum of varying (a) R, (b) h, and (c–f) l, w, s, and g of the resistive ITO layer.
Figure 6: Absorption spectra for (a) polarization angles and (b) oblique incident angles.
Figure 7: Illustration of EM wave on planar and cylindrical surface.
Figure 8: (a) Monostatic RCS of the copper plate and planar OTFM. (b) RCS reduction in OTFM for flat and curved structures.
17 pages, 3792 KiB  
Article
Mapping Ratoon Rice Fields Based on SAR Time Series and Phenology Data in Cloudy Regions
by Yuechen Li, Rongkun Zhao and Yue Wang
Remote Sens. 2024, 16(15), 2703; https://doi.org/10.3390/rs16152703 - 24 Jul 2024
Viewed by 331
Abstract
Ratoon rice (RR) has emerged as an active adaptation to climate uncertainty, stabilizing total paddy rice yield and effectively reducing agriculture-related ecological and environmental issues. However, identifying key remote sensing parameters for RR under cloudy and foggy conditions is challenging, and existing RR monitoring methods in these regions face significant uncertainties. Here, given the sensitivity of synthetic aperture radar (SAR) backscattering signals to the crop phenological period, this paper introduces a threshold model utilizing Sentinel-1A SAR data and phenological information for mapping RR. The Yongchuan District of Chongqing, which is often cloudy and foggy, was selected as the study region, where VH-polarized backscatter coefficients of Sentinel-1 images were obtained at 10 m spatial resolution in 2020. Based on the proposed threshold model, the overall accuracy of RR extraction was up to 90.24%, the F1 score was 0.92, and the Kappa coefficient was 0.80. Further analysis showed that the extracted RR boundaries exhibited high consistency with true Sentinel-2 remote sensing images and that the extracted RR area was in good agreement with the actual planted area. This threshold model demonstrated good applicability in the studied cloudy and foggy region and successfully distinguished RR from other paddy rice types. The methodological framework established in this study provides a basis for extensive application in China and other significant RR-producing regions globally.
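A minimal sketch of a threshold rule in the spirit of the model described above, using hypothetical thresholds and season windows rather than the paper's calibrated values: ratoon rice is expected to show a flooding trough and a growth peak of the VH series in both the main season and the ratoon season.

```python
import numpy as np

def is_ratoon_rice(vh_db, doy, trough_thr=-20.0, peak_thr=-14.0):
    """Flag a pixel whose VH series shows a trough and a peak in both rice seasons."""
    main = (doy > 90) & (doy < 180)      # assumed main-season window (DOY)
    ratoon = (doy > 200) & (doy < 300)   # assumed ratoon-season window (DOY)
    return bool(vh_db[main].min() < trough_thr and vh_db[main].max() > peak_thr and
                vh_db[ratoon].min() < trough_thr and vh_db[ratoon].max() > peak_thr)

doy = np.arange(1, 366, 12)
vh = -16 + 5 * np.sin(doy / 29.0)        # synthetic series for illustration only
print(is_ratoon_rice(vh, doy))
```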
Figure 1: (a) Geographic distribution of RR and non-RR samples in Yongchuan District; (b) Typical photographs of field survey of RR in 2020; (c) Weather conditions of Yongchuan District in 2020. (This figure is based on reference [15].)
Figure 2: RR cropping calendar of the study region: F, M, and L represent the first, middle, and last parts of each month, respectively. (This figure is based on reference [15].)
Figure 3: Schematic flow of RR mapping.
Figure 4: VH: VH backscatter time series curve of RR by using Lee sigma filtering; VV: VV backscatter time series curve of RR by using Lee sigma filtering; SG_VH: VH backscatter time series curve of RR by using Lee sigma filtering and S-G filtering; SG_VV: VV backscatter time series curve of RR by using Lee sigma filtering and S-G filtering. The troughs and peaks on the curves were determined by the backscattering coefficient corresponding to the sample points of RR. The shadow parts are the error range.
Figure 5: Temporal characteristics of VH backscatter coefficients across five types of land-cover. The troughs and peaks on the curves were determined by the backscattering coefficient corresponding to the sample points. The shadow parts are the error range.
Figure 6: Decision tree model for RR mapping.
Figure 7: The RR mapping results for Yongchuan District in 2020. (A) Spatial distribution of RR across the study region, where (a) shows spatial distribution of RR in different towns/streets. (a-1) and (a-3) show the detailed spatial distribution information of RR in two random sub-areas, while (a-2) and (a-4) present optical imagery captured by Sentinel-2 on 4 August 2020 for these same sub-areas. (B) Planting areas of RR in towns/streets of Yongchuan District. Note: 1-Daan Street; 2-Chenshi Street; 3-Shenglihu Street; 4-Jinlong Town; 5-Banqiao Town; 6-Yongrong Town; 7-Hegeng Town; 8-Zhongshanlu Street; 9-Linjiang Town; 10-Xianlong Town; 11-Satellite Lake Street; 12-Ji’an Town; 13-Songji Town; 14-Baofeng Town; 15-Chashan Zhuhai Street; 16-Zhutuo Town; 17-Shuangshi Town; 18-Qingfeng Town; 19-Wujian Town; 20-Laisu Town; 21-Sanjiao Town; 22-Honglu Town; 23-Nandajie Street.
14 pages, 5602 KiB  
Article
Surface Soil Moisture Estimation from Time Series of RADARSAT Constellation Mission Compact Polarimetric Data for the Identification of Water-Saturated Areas
by Igor Zakharov, Sarah Kohlsmith, Jon Hornung, François Charbonneau, Pradeep Bobby and Mark Howell
Remote Sens. 2024, 16(14), 2664; https://doi.org/10.3390/rs16142664 - 21 Jul 2024
Viewed by 400
Abstract
Soil moisture is one of the main factors affecting microwave radar backscatter from the ground. While other factors also affect backscatter levels (for instance, surface roughness, vegetation, and incidence angle), relative variations in soil moisture can be estimated using space-based, medium-resolution, multi-temporal synthetic aperture radar (SAR). Understanding the distribution and identification of water-saturated areas using SAR soil moisture can be important for wetland mapping. The SAR soil moisture retrieval algorithm provides a relative assessment and requires calibration over wet and dry periods. In this work, relative soil moisture indicators are derived from a time series of RADARSAT Constellation Mission (RCM) SAR compact polarimetric (CP) data over reclaimed areas of an oil sands mine in Alberta, Canada. An evaluation of the soil moisture product is performed using in situ measurements, showing agreement from June to September. The surface scattering component of the m-chi CP decomposition and the RL SAR products demonstrated good agreement with the field data (low RMSE values and a perfect alignment with field-identified wetlands).
(This article belongs to the Special Issue GIS and Remote Sensing in Soil Mapping and Modeling)
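A minimal sketch (assumed form, not the authors' exact retrieval) of a relative soil moisture indicator scaled between dry- and wet-period reference backscatter, which is the kind of calibration over wet and dry periods mentioned above:

```python
import numpy as np

def relative_ssm(sigma0_db, sigma_dry_db, sigma_wet_db):
    """Scale backscatter (dB) between dry- and wet-period references to a 0-1 index."""
    ssm = (sigma0_db - sigma_dry_db) / (sigma_wet_db - sigma_dry_db)
    return np.clip(ssm, 0.0, 1.0)

# hypothetical dry/wet references and three observations (dB)
print(relative_ssm(np.array([-14.0, -10.0, -7.0]), sigma_dry_db=-15.0, sigma_wet_db=-8.0))
```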
Figure 1: Suncor’s Base Mine. WorldView-2/3 satellite data © MAXAR (2022).
Figure 2: The local incidence angle (in degrees) estimated for a fragment of the RCM scene acquired on 3 June 2022 (top), the initial σ⁰ distribution for all LIA across RCM scenes (middle), and σ⁰ normalized to the reference angle (bottom).
Figure 3: Estimated surface soil moisture from SAR (scatter plot) and surface soil moisture measured at the station, with daily precipitation (in mm) on the vertical axis shown on the right.
Figure 4: Estimated averaged SSM from RCM CP products for the areas corresponding to the previously reclaimed wetlands (shown as black polygons). RADARSAT Constellation Mission Imagery © Government of Canada (2022). RADARSAT is an official mark of the Canadian Space Agency. WorldView-2 image (bottom) © MAXAR.
10 pages, 5049 KiB  
Article
Winter Precipitation Detection Using C- and X-Band Radar Measurements
by Ayano Ueki, Michihiro S. Teshiba, David Schvartzman, Pierre-Emmanuel Kirstetter, Robert D. Palmer, Kohei Osa, Tian-You Yu, Boonleng Cheong and David J. Bodine
Remote Sens. 2024, 16(14), 2630; https://doi.org/10.3390/rs16142630 - 18 Jul 2024
Viewed by 404
Abstract
Winter continues to witness numerous automobile accidents attributed to graupel and hail precipitation in Japan. Detecting these weather phenomena using radar technology holds promise for reducing the impact of such accidents and improving road maintenance operations. Weather radars operating at different frequencies, such as C- and X-band, prove effective in graupel detection by analyzing variations in the backscattered signals within the same radar volume. When particle diameters exceed 5 mm, the study of Mie scattering characteristics across different melting ratios becomes informative, and the dual frequency ratio (DFR) shows potential for graupel detection. The DFR presents wider variations, with a ten-times difference across melting ratios as density increases, offering opportunities for precise detection; the DFR amplitude also rises with temperature changes. However, for hydrometeor diameters below approximately 3 mm, within the Rayleigh region, the DFR exhibits minimal fluctuation. Hence, this technique is best suited for diameters exceeding 3 mm. Additionally, a “detection alert” for graupel/hail has been proposed. Based on this alert, and with realistic rain/graupel size distributions, graupel/hail can be detected with an approximate probability of 70%.
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)
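For reference, the dual frequency ratio compares backscattering cross-sections at the two bands; a minimal sketch (assuming a C-band over X-band ratio expressed in dB, not the paper's Mie-scattering computation of σ_C and σ_X over melting ratio, density, and temperature) is:

```python
import numpy as np

def dual_frequency_ratio(sigma_c, sigma_x):
    """DFR in dB, assumed here as the C-band over X-band backscattering cross-section ratio."""
    return 10.0 * np.log10(np.asarray(sigma_c) / np.asarray(sigma_x))

# hypothetical linear radar cross-sections for three particles
print(dual_frequency_ratio([2.0e-6, 5.0e-6, 9.0e-6], [1.5e-6, 2.0e-6, 3.0e-6]))
```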
Figure 1: Occurrences of car accidents caused by graupel-/hail-fall events (translated from [1] into English).
Figure 2: Radar cross-section versus particle diameter at C- and X-band frequencies.
Figure 3: DFR as a function of the melting ratio for densities of 0.1, 0.3, 0.5, and 1 g/cm³ at a temperature of −6 °C. The x-axis represents the melting ratio of hydrometeors, and the y-axis the DFR of the radar backscattering cross-sections at the C-band (σ_C) and X-band (σ_X) radar frequencies.
Figure 4: DFR as a function of the melting ratio for temperatures of −6, −3, 0, and 3 °C at a density of 0.3 g/cm³. The x-axis represents the melting ratio of hydrometeors, and the y-axis the DFR of the radar backscattering cross-sections at the C-band (σ_C) and X-band (σ_X) radar frequencies.
Figure 5: Possibilities of graupel detection with DFR for temperatures of −6, −3, 0, and 3 °C at a density of 0.3 g/cm³.
15 pages, 23820 KiB  
Article
Integrated Use of Synthetic Aperture Radar and Optical Data in Mapping Native Vegetation: A Study in a Transitional Brazilian Cerrado–Atlantic Forest Interface
by Allita R. Santos, Mariana A. G. A. Barbosa, Phelipe S. Anjinho, Denise Parizotto and Frederico F. Mauad
Remote Sens. 2024, 16(14), 2559; https://doi.org/10.3390/rs16142559 - 12 Jul 2024
Viewed by 406
Abstract
This study develops a framework for mapping native vegetation in a transition area between the Brazilian Cerrado and the Atlantic Forest from integrated spatial information of the Sentinel-1 and Sentinel-2 satellites. Most studies use integrated data to improve classification accuracy under adverse atmospheric conditions, in which optical data have many errors. However, this approach can also improve classifications carried out in landscapes with favorable atmospheric conditions. The use of Sentinel-1 and Sentinel-2 data can increase the accuracy of mapping algorithms and facilitate visual interpretation during sampling by providing more parameters that can be explored to differentiate land use classes with complementary information, such as spectral, backscattering, polarimetric, and interferometric attributes. The study area comprises the Lobo Reservoir Hydrographic Basin, which is part of an environmental conservation unit protected by Brazilian law and has significant human development. LULC was classified using the random forest machine learning algorithm. The classifying attributes were backscatter coefficients, polarimetric decomposition, and interferometric coherence for the radar data (Sentinel-1), and optical spectral data comprising bands in the red edge, near-infrared, and shortwave infrared (Sentinel-2). The attributes were evaluated in three settings: SAR and optical data separately (C1 and C2, respectively) and integrated (C3). The study found greater accuracy for C3 (96.54%), an improvement of nearly 2% compared to C2 (94.78%) and of more than 40% in relation to C1 (55.73%). The classification algorithm encountered significant challenges in identifying wetlands in C1, but performance improved in C3, which enhanced differentiation by stratifying a greater number of classes during training and facilitating visual interpretation during sampling. Accordingly, the integrated use of SAR and optical data can improve LULC mapping in tropical regions where biomes interface, as in the transitional Brazilian Cerrado–Atlantic Forest.
Figure 1: South America (a), São Paulo State (b), and Lobo Reservoir Hydrographic Basin (c) localization and points used for field validation.
Figure 2: Preprocessing flowchart of LULC classification.
Figure 3: SAR attributes and optical data with values for each class.
Figure 4: Importance of variables for mean decrease accuracy and mean decrease Gini.
Figure 5: RGB band compositions for C1, C2, and C3.
Figure 6: LULC for C1, C2, and C3 with native vegetation in the transitional Brazilian Cerrado and Atlantic Forest interface.
Figure 7: Photographic record for each LRHB class.
Figure 8: Producer and user accuracy values, omission, and inclusion errors for each class for C1, C2, and C3.
22 pages, 33778 KiB  
Article
Synthetic Aperture Radar Monitoring of Snow in a Reindeer-Grazing Landscape
by Ida Carlsson, Gunhild Rosqvist, Jenny Marika Wennbom and Ian A. Brown
Remote Sens. 2024, 16(13), 2329; https://doi.org/10.3390/rs16132329 - 26 Jun 2024
Viewed by 814
Abstract
Snow cover and runoff play an important role in the Arctic environment, which is increasingly affected by climate change. Over the past 30 years, winter temperatures in northern Sweden have risen by 2 °C, accompanied by an increase in precipitation. This has led to a higher incidence of thaw–freeze and rain-on-snow events. Snow properties, such as snow depth and longevity, and the timing of snowmelt in spring significantly impact the alpine tundra vegetation. The emergent vegetation at the edge of snow patches during spring and summer constitutes an essential nutrient supply for reindeer. We have used Sentinel-1 synthetic aperture radar (SAR) to determine the onset of surface melt and the end of snow cover in the core reindeer grazing area of the Laevás Sámi reindeer-herding community in northern Sweden. Using SAR data from March to August of 2017 to 2021, the start of the surface melt is identified by detecting the season’s backscatter minimum, and the end of the snow cover is determined using a threshold approach. A comparison of the Sentinel-1-derived end of snow cover with in situ measurements from an automatic weather station located in Laevásvággi, for the years 2017 to 2020, reveals a 2- to 10-day difference in snow-free ground conditions, which indicates that the method can be used to determine when the ground is free of snow. VH data are preferred to VV data due to the former’s lower sensitivity to temporary wetting events. The results for the seasonal backscatter minimum show a distinct 25-day difference in the start of the runoff between the 5 investigated years. The backscatter-minimum and threshold-based method used here serves as a valuable complement to global snowmelt monitoring.
(This article belongs to the Section Ecological Remote Sensing)
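A minimal sketch of the two detection rules described above (placeholder recovery threshold and synthetic data, not the study's calibrated procedure): the start of surface melt (SOSM) is taken at the seasonal backscatter minimum, and the end of snow cover (EOS) when the signal first recovers above that minimum by a fixed offset.

```python
import numpy as np

def sosm_eos(doy, vh_db, recovery_db=2.0):
    """Return (SOSM day, EOS day): SOSM at the seasonal backscatter minimum, EOS when the
    signal first rises above the minimum by `recovery_db` (a placeholder threshold)."""
    i_min = int(np.argmin(vh_db))                           # season minimum -> SOSM
    recovered = vh_db[i_min:] > vh_db[i_min] + recovery_db
    i_eos = i_min + int(np.argmax(recovered)) if recovered.any() else len(doy) - 1
    return int(doy[i_min]), int(doy[i_eos])

doy = np.arange(60, 240, 6)                                 # March-August acquisitions
vh = -14 - 6 * np.exp(-((doy - 140) / 25.0) ** 2)           # synthetic melt-season dip (dB)
print(sosm_eos(doy, vh))
```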
Figure 1: Modified illustration based on Buchelt et al. [36], describing the backscatter intensity during the snow melting season derived from S-1 SAR data. The start of the surface melt (SOSM) is marked by the S-1 backscatter reaching its minimum, while the end of the snowmelt (EOS) is indicated by the backscatter starting to reach a higher value after reaching the season’s minimum value [36].
Figure 2: The area of interest for this study is the spring and summer grazing area used by reindeer of the Laevás Sámi reindeer-herding community, northern Sweden. The yellow circle marks where the automatic weather station (AWS) in Laevásvággi (18.96°E, 68.04°N) is located, and the area is also the calving ground for Laevás reindeer [42,44].
Figure 3: The acquisitions of S-1 (blue) single-look complex data, in interferometric wide-swath mode, in ascending orbit from the Alaska Satellite Facility [45], downloaded on the 12th of September and the 14th of December 2022 and the 6th of May 2024.
Figure 4: Workflow for preprocessing the S-1 images from 2017–2021 using the Sentinel Application Platform (SNAP). This process involved obtaining the latest orbit file, splitting swaths, and debursting images, followed by merging them into a single coherent image. Calibration to the backscatter coefficient (β⁰) was conducted according to Small (2011). Subsequent steps included multilooking, terrain flattening for pixel location rectification, and Lee sigma speckle filtering [36] for noise removal. Radiometric terrain correction utilized the range-Doppler technique with a 2 m DEM [44].
Figure 5: Simplification (the season backscatter is smoothed by the average backscattering during the season) of the seasonal backscatter in decibels from Sentinel-1 for all five years in both polarisations (a,b). VV polarisation (a) consistently exhibits higher backscatter values throughout the season compared to VH polarisation (b). VV polarisation is preferred for surface features and roughness, while VH polarisation is more suitable for detecting internal structure and volume scattering within targets like vegetation. Notably, in March 2017, there is a period characterized by lower backscatter values in the VV polarisation.
Figure 6: Overview of monthly SOSM_S-1 in the VV polarisation from 2017 to 2021 in the spring and summer grazing area of the Laevás reindeer. In 2017 (a), a substantial SOSM_S-1 was detected in the VV polarisation mode, covering 56% of the area in March. In 2018 (b), the SOSM_S-1 started in April (red) and May (light yellow), and in May (light yellow) during 2019 (c). In 2020 (d), the SOSM_S-1 started in May (light yellow) and June (light blue). During 2021 (e), there were large SOSM_S-1 in May (light yellow) and in July (blue). These findings suggest that the SOSM_S-1 varies significantly across the years, with the VV polarisation mode consistently exhibiting an earlier SOSM than the VH.
Figure 7: Overview of the monthly SOSM_S-1 in the VH polarisation over the observed years 2017 to 2021. In 2017 (a), the VH exhibited the largest snowmelt in May (light yellow) and June (light blue). In 2018 (b), a significant SOSM_S-1 was displayed in April (red) and May (light yellow), as well as in May (light yellow) in 2019 (c). Noteworthily, 2020 (d) exhibited pronounced SOSM_S-1 peaks in May (light yellow) and June (light blue). In 2021 (e), the highest SOSM was recorded in May (light yellow). These findings highlight the variability in the seasonal snowmelt dynamics captured in the data.
Figure 8: Monthly end of snowmelt (EOS_S-1) in the VV polarisation between the years 2017 and 2021. In the VV polarisation for 2017 (a), the deviation is evident, with the largest amount of EOS_S-1 occurring in March, a pattern not observed in the VH polarisation (Figure 9). The year 2018 (b) exhibits an early EOS_S-1 in the VV polarisation, indicating bare ground in a significant portion of the area as early as May (light yellow). Moreover, 2019 (c) and 2020 (d) show similar EOS_S-1 in June (light blue) and July (blue), while 2019 exhibits some earlier melting, particularly in April (red). In 2021 (e), the EOS_S-1 occurrence was notable in May, with a more substantial presence observed in July. There are also areas within the region where data are not available, as evidenced across all the years.
Figure 9: End of season (EOS) observations from 2017 to 2021, as depicted by the VH polarisation. The data reveal significant variations in the EOS percentages across different years and months, with certain trends standing out prominently. For instance, noticeable spikes in the snowmelt are observed in May and June across multiple years, indicating periods of accelerated melting. In 2017 (a), the EOS percentages remained consistently low throughout the observed months, with minimal snowmelt recorded and the largest EOS in June (light blue). In 2018 (b), snowmelt began to appear in April (red) and increased notably in May (light blue). In 2019 (c), the trend of increasing snowmelt continued, with May (light yellow) showcasing substantial melting percentages. Moreover, 2020 (d) witnessed a pronounced increase in snowmelt compared to previous years, particularly notable in May (light yellow) and June (light blue). In 2021 (e), the EOS percentages displayed a remarkable spike in May (light yellow), indicating a notably accelerated snowmelt compared to previous years.
19 pages, 5541 KiB  
Article
Application of Normalized Radar Backscatter and Hyperspectral Data to Augment Rangeland Vegetation Fractional Classification
by Matthew Rigge, Brett Bunde, Kory Postma, Simon Oliver and Norman Mueller
Remote Sens. 2024, 16(13), 2315; https://doi.org/10.3390/rs16132315 - 25 Jun 2024
Viewed by 1025
Abstract
Rangeland ecosystems in the western United States are vulnerable to climate change, fire, and anthropogenic disturbance, yet classification of rangeland areas remains difficult due to frequently sparse vegetation canopies that increase the influence of soils and senesced vegetation, the overall abundance of senesced vegetation, the heterogeneity of life forms, and limited ground-based data. The Rangeland Condition Monitoring Assessment and Projection (RCMAP) project provides fractional vegetation cover maps across western North America using Landsat imagery and artificial intelligence from 1985 to 2023 at yearly time steps. The objective of this case study is to apply several new data streams, including Sentinel Synthetic Aperture Radar (SAR) data and Earth Surface Mineral Dust Source Investigation (EMIT) hyperspectral data, to the RCMAP model. We run a series of five tests (Landsat-base model, base + SAR, base + EMIT, base + SAR + EMIT, and base + Landsat NEXT [LNEXT] synthesized from EMIT) over a difficult-to-classify region centered in southwest Montana, USA. Our testing results indicate a clear accuracy benefit of adding SAR and EMIT data to the RCMAP model, with 7.5% and 29% relative increases in independent accuracy (R2), respectively. The ability of SAR data to observe vegetation height allows for more accurate classification of vegetation types, whereas EMIT’s continuous characterization of the spectral response boosts discriminatory power relative to multispectral data. Our spectral profile analysis reveals that the enhanced classification power with EMIT is related to both the improved spectral resolution and the representation of the entire spectral domain compared to legacy Landsat. One key finding is that legacy Landsat bands largely miss portions of the electromagnetic spectrum where separation among important rangeland targets exists, namely in the 900–1250 nm and 1500–1780 nm ranges. Synthesized LNEXT data include these gaps, but the reduced spectral resolution compared to EMIT results in an intermediate 18% increase in accuracy relative to the base run. Here, we show the promise of enhanced classification accuracy using EMIT data and, to a smaller extent, SAR.
Figure 1
<p>Study area used to analyze impact of Synthetic Aperture Radar (SAR) and Earth Surface Mineral Dust Source Investigation (EMIT) hyperspectral data in rangeland vegetation classification. Base image is a 50th percentile composite of 2016 Landsat imagery. Inset map shows study location in the conterminous United States.</p>
Figure 2
<p>Scatterplots of independent validation (n = 399) for selected component cover predictions by model run. Line of best fit and 1-to-1 line indicated by dashed blue and solid red line, respectively. For improved visualization, the x and y ranges vary by plot. SAR = Synthetic Aperture Radar; EMIT = Earth Surface Mineral Dust Source Investigation; LNEXT = Landsat NEXT.</p>
Figure 3
<p>Component predictions by model for shrub (<b>top</b>) and herbaceous cover (<b>bottom</b>). White indicates either land cover or Earth Surface Mineral Dust Source Investigation (EMIT) masking. SAR = Synthetic Aperture Radar.</p>
Figure 4
<p>Correlation (<span class="html-italic">r</span>) between Rangeland Condition Monitoring Assessment and Projection (RCMAP) high-resolution training data and Synthetic Aperture Radar (SAR) vertical transmit and horizontal receive (VH)/vertical transmit and vertical receive (VV) data at <span class="html-italic">n</span> = 60,000 pixels.</p>
Figure 5
<p>Average spectral profiles of important rangeland targets (see Methods Section for classification details) based on high-resolution Rangeland Condition Monitoring Assessment and Projection (RCMAP) data. Line data reflect Earth Surface Mineral Dust Source Investigation (EMIT) profiles, and points of the same color represent Landsat.</p>
Figure 6
<p>Separability of rangeland classes plotted in <a href="#remotesensing-16-02315-f005" class="html-fig">Figure 5</a> as measured by the standard deviation among spectral profiles across the classes at each Earth Surface Mineral Dust Source Investigation (EMIT) band. Width of Landsat band ranges (bands 2–7 used in Rangeland Condition Monitoring Assessment and Projection [RCMAP] analysis) are plotted in black.</p>
Figure 7
<p>Average spectral profile from Earth Surface Mineral Dust Source Investigation (EMIT) by shrub 5% cover bins (color) in Rangeland Condition Monitoring Assessment and Projection (RCMAP) high-resolution training sites. Some bins are omitted for clarity. We removed pixels with &gt;0% tree cover and ≥20% herbaceous cover from this analysis. All species of shrub are included.</p>
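Figures 6 and 7 above describe band-wise separability as the standard deviation among the class-mean spectral profiles. A minimal numpy sketch of that measure is given below; the class-mean spectra are hypothetical placeholders rather than the article's EMIT data.

```python
import numpy as np

# Hypothetical class-mean spectra: rows = rangeland classes, columns = hyperspectral bands.
rng = np.random.default_rng(3)
n_classes, n_bands = 6, 244
class_means = rng.uniform(0.05, 0.55, size=(n_classes, n_bands))

# Separability per band: standard deviation of the class-mean reflectances at each
# wavelength (larger values mean the classes are more spread out at that band).
separability = class_means.std(axis=0)

# Bands offering the most discrimination among the classes.
top = np.argsort(separability)[::-1][:5]
print("most separable band indices:", top)
print("their separability values:", np.round(separability[top], 4))
```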
30 pages, 12064 KiB  
Article
Inversion of Forest Aboveground Biomass in Regions with Complex Terrain Based on PolSAR Data and a Machine Learning Model: Radiometric Terrain Correction Assessment
by Yonghui Nie, Rula Sa, Sergey Chumachenko, Yifan Hu, Youzhu Wang and Wenyi Fan
Remote Sens. 2024, 16(12), 2229; https://doi.org/10.3390/rs16122229 - 19 Jun 2024
Viewed by 475
Abstract
The accurate estimation of forest aboveground biomass (AGB) in areas with complex terrain is very important for quantifying the carbon sequestration capacity of forest ecosystems and studying the regional or global carbon cycle. In our previous research, we proposed the radiometric terrain correction (RTC) process for introducing normalized correction factors, which proved effective and robust for the backscattering coefficient of polarimetric synthetic aperture radar (PolSAR) data and the monadic model. However, the impact of RTC on the accuracy of feature extraction and on the performance of machine learning multiple regression models in forest AGB retrieval requires further exploration. In this study, based on PolSAR data provided by ALOS-2, 117 feature variables were accurately extracted using the RTC process, and then the Boruta and recursive feature elimination with cross-validation (RFECV) algorithms were used to perform multi-step feature selection. Finally, 10 machine learning regression models and the Optuna algorithm were used to evaluate the effectiveness and robustness of RTC in improving the quality of the PolSAR feature set and the performance of the regression models. The results revealed that, compared with the results without RTC, RTC effectively and robustly improves the accuracy of PolSAR features (the Pearson correlation R between the PolSAR features and measured forest AGB increased by 0.26 on average) and the performance of regression models (the coefficient of determination R2 increased by 0.14 on average, and the rRMSE decreased by 4.20% on average), although a certain degree of overcorrection remains in the RTC process. In addition, where the data exhibit linear relationships, linear models remain a powerful and practical choice owing to their efficiency and stability. For example, the optimal regression model in this study is the Bayesian Ridge linear regression model (R2 = 0.82, rRMSE = 18.06%). Full article
(This article belongs to the Special Issue SAR for Forest Mapping III)
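A condensed sketch of the regression workflow outlined in the abstract (feature selection followed by a Bayesian Ridge model with cross-validated accuracy reporting) is shown below. It uses synthetic stand-ins for the 117 terrain-corrected PolSAR features and the field-measured AGB, and it approximates the article's multi-step Boruta + RFECV selection with RFECV alone; it is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(42)
n_plots, n_features = 120, 117          # hypothetical field plots x PolSAR features

# Synthetic stand-ins for terrain-corrected PolSAR features and measured AGB (t/ha).
X = rng.normal(size=(n_plots, n_features))
agb = 80 + 25 * X[:, 0] - 15 * X[:, 5] + 10 * X[:, 20] + rng.normal(0, 10, n_plots)

# Recursive feature elimination with cross-validation wrapped around a linear model;
# BayesianRidge exposes coef_, so RFECV can rank features by coefficient magnitude.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
selector = RFECV(BayesianRidge(), step=1, cv=cv, scoring="r2", min_features_to_select=3)
selector.fit(X, agb)
X_sel = selector.transform(X)
print(f"features kept: {selector.n_features_}")

# Cross-validated prediction with the selected features, reported as R2 and rRMSE.
pred = cross_val_predict(BayesianRidge(), X_sel, agb, cv=cv)
rmse = mean_squared_error(agb, pred) ** 0.5
print(f"R2 = {r2_score(agb, pred):.2f}, rRMSE = {100 * rmse / agb.mean():.2f}%")
```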
Graphical abstract
Figure 1
<p>Overview of study sites: (<b>a</b>) the location of Saihanba Forest Farm in relation to the provinces and counties in China; (<b>b</b>) the spatial location of ALOS-2 data relative to Weichang County; (<b>c</b>) the Pauli RGB image (R: |HH-VV|, G: |HV|, B: |HH + VV|) based on PolSAR data and the location of the measured samples; the basemap is the optical image of Tianditu.</p>
Figure 2
<p>A flowchart of the proposed forest AGB mapping scheme.</p>
Figure 3
<p>Absolute value of Pearson correlation coefficient (R) between forest AGB and the PolSAR features based on the data (25 July 2020) with radiometric terrain correction (RTC, olive) and non-RTC data (NRT, red). Sorted based on R_RTC (i.e., absolute value of R value between forest AGB and SAR features extracted based on RTC data). (<b>a</b>) The first set of the extracted original PolSAR features; (<b>b</b>) the second set of the extracted original PolSAR features (39 in total); (<b>c</b>) derived features based on PolSAR original features (39 in total).</p>
Figure 4
<p>Taking PolSAR data from 25 July 2020 as an example, we created scatter density plots between the decibel values of the three components (Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl)) of the Freeman three-decomposition in different topographic correction stages (non-radiometric terrain correction (NRTC), polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC)) and the local incidence angle <span class="html-italic">θ<sub>loc</sub></span>. (<b>a</b>) NRTC_Vol; (<b>b</b>) POAC_Vol; (<b>c</b>) ESAC_Vol; (<b>d</b>) AVEC_Vol; (<b>e</b>) NRTC_Odd; (<b>f</b>) POAC_Odd; (<b>g</b>) ESAC_Odd; (<b>h</b>) AVEC_Odd; (<b>i</b>) NRTC_Dbl; (<b>j</b>) POAC_Dbl; (<b>k</b>) ESAC_Dbl; (<b>l</b>) AVEC_Dbl.</p>
Figure 5
<p>Taking PolSAR data from 25 July 2020 as an example, we created a scatter density plot for each component of Freeman three-decomposition (FRE3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC was completed) with respect to non-RTC (NRTC). The three components of FRE3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Figure 6
<p>Analysis of the effectiveness of RTC and the optimal regression model of this study, taking the SAR data from 25 July 2020 as an example. (<b>a</b>) The training results of the NRTC and RTC data, where the black dots are the results of the corresponding single training; (<b>b</b>) scatter plot of the measured forest AGB and the AGB predicted by the optimal regression model (BysRidge); (<b>c</b>) spatial distribution map of forest AGB in the study area based on optimal model prediction.</p>
Figure A1
<p>The scatter density plot of each component of Yamaguchi three-component (YAM3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC) with respect to non-RTC (NRTC). The three components of YAM3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Figure A2
<p>The result of feature selection: (<b>a</b>) the 32 features selected in preliminary feature selection (Boruta algorithm) based on radiative terrain correction (RTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>b</b>) the 21 features selected in preliminary feature selection (Boruta algorithm) based on non-RTC (NRTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>c</b>) the number of features selected in the second step feature selection (RFECV algorithm) based on RTC and NRTC data; (<b>d</b>) the features selected in different multivariate linear models and the variance inflation factor (VIF) value corresponding to each feature; (<b>e</b>) the features selected in different non-parametric models.</p>
Figure A3
<p>Scatter plot of measured forest AGB and predicted forest AGB. The prediction model is an optimal regression model based on the PolSAR data processed by radiometric terrain correction (RTC) from 25 July 2020. (<b>a</b>) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 11 July 2020. (<b>b</b>) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 8 August 2020.</p>
24 pages, 18033 KiB  
Article
Full-Scale Aggregated MobileUNet: An Improved U-Net Architecture for SAR Oil Spill Detection
by Yi-Ting Chen, Lena Chang and Jung-Hua Wang
Sensors 2024, 24(12), 3724; https://doi.org/10.3390/s24123724 - 7 Jun 2024
Viewed by 555
Abstract
Oil spills are a major threat to marine and coastal environments. Because oil films damp the short surface waves that dominate radar backscatter, spills appear as dark regions in synthetic aperture radar (SAR) images. However, many marine phenomena produce similar dark signatures and can lead to erroneous detections of oil spills. In addition, SAR images of the ocean include multiple targets, such as sea surface, land, ships, and oil spills and their look-alikes, so training a multi-category classifier faces significant challenges due to the inherent class imbalance. Addressing this issue requires extracting target features more effectively. In this study, a lightweight U-Net-based model, Full-Scale Aggregated MobileUNet (FA-MobileUNet), was proposed to improve the detection performance for oil spills using SAR images. First, a lightweight MobileNetv3 model was used as the backbone of the U-Net encoder for feature extraction. Next, atrous spatial pyramid pooling (ASPP) and a convolutional block attention module (CBAM) were used to improve the network's ability to extract multi-scale features and to increase the speed of module calculation. Finally, full-scale features from the encoder were aggregated to further strengthen feature extraction. The proposed modified network enhanced the extraction and integration of features at different scales to improve the accuracy of detecting diverse marine targets. The experimental results showed that the mean intersection over union (mIoU) of the proposed model exceeded 80% for the detection of five types of marine targets: sea surface, land, ships, and oil spills and their look-alikes. In addition, the IoU of the proposed model reached 75.85% and 72.67% for oil spill and look-alike detection, 18.94 and 25.55 percentage points higher than that of the original U-Net model, respectively. Compared with other segmentation models, the proposed network more accurately classifies the dark regions in SAR images into oil spills and their look-alikes. Furthermore, the detection performance and computational efficiency of the proposed model were also validated against other semantic segmentation models. Full article
(This article belongs to the Special Issue Intelligent SAR Target Detection and Recognition)
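The segmentation results above are reported as per-class IoU and mIoU over five target classes. A small numpy sketch of those metrics follows; the toy 4 x 4 label maps and class indices are placeholders standing in for actual SAR segmentation output and ground truth.

```python
import numpy as np

CLASSES = ["sea surface", "oil spill", "look-alike", "ship", "land"]

def per_class_iou(pred, truth, n_classes=5):
    """Intersection over union for each class, computed from integer label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        ious.append(inter / union if union else float("nan"))
    return np.array(ious)

# Toy label maps standing in for a predicted segmentation and its ground truth.
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 2],
                  [3, 0, 2, 2],
                  [4, 4, 0, 0]])
pred = np.array([[0, 0, 1, 2],
                 [0, 1, 1, 2],
                 [3, 0, 2, 2],
                 [4, 0, 0, 0]])

ious = per_class_iou(pred, truth)
for name, iou in zip(CLASSES, ious):
    print(f"{name:>12s}: IoU = {iou:.2f}")
print(f"        mIoU = {np.nanmean(ious):.2f}")
```

Using nanmean keeps classes that are absent from both maps from dragging the mean down, which matters when rare classes such as ships occupy only a few pixels per scene.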
Figure 1
<p>Samples of SAR oil spill images from the MKLab dataset. Cyan, red, brown, green, and black correspond to oil spills, look-alikes, ships, land, and sea surface, respectively. (<b>a</b>) SAR images. (<b>b</b>) RGB masks.</p>
Figure 2
<p>The collected SAR images corresponding to oil spill events in the Mediterranean Sea. The sampling dates from left to right are 25 February 2021, 5 September 2021, and 5 September 2021. Cyan, red, brown, green, and black correspond to oil spills, look-alikes, ships, land, and sea surface, respectively. (<b>a</b>) SAR images. (<b>b</b>) RGB masks.</p>
Figure 3
<p>The architecture of the proposed FA-MobileUNet model.</p>
Figure 4
<p>The U-Net structure proposed by Ronneberger et al. [<a href="#B20-sensors-24-03724" class="html-bibr">20</a>].</p>
Figure 5
<p>The block structure of MobileNetv3.</p>
Figure 6
<p>The structure of the CBAM.</p>
Figure 7
<p>The structure of the ASPP module.</p>
Figure 8
<p>Full-scale aggregation example of stage 4 (44 × 44) of the decoder layer in <a href="#sensors-24-03724-f003" class="html-fig">Figure 3</a>.</p>
Figure 9
<p>The training process of the proposed FA-MobileUNet model using augmented MKLab dataset. (<b>a</b>) Loss. (<b>b</b>) Accuracy.</p>
Figure 10
<p>The segmentation results of the 55th image in the MKLab dataset: (<b>a</b>) original SAR image, (<b>b</b>) the corresponding ground truth data, and results from (<b>c</b>) U-Net model, (<b>d</b>) LinkNet model, (<b>e</b>) PSPNet model, (<b>f</b>) DeepLabv2 model, (<b>g</b>) DeepLabv3+ model, (<b>h</b>) FA-MobileUNet model. Black, cyan, red, brown, and green represent the sea surface, oil spills, look-alikes, ships, and land, respectively.</p>
Figure 11
<p>The segmentation results of 71st image in the MKLab dataset: (<b>a</b>) original SAR image, (<b>b</b>) the corresponding ground truth data, and results from (<b>c</b>) U-Net model, (<b>d</b>) LinkNet model, (<b>e</b>) PSPNet model, (<b>f</b>) DeepLabv2 model, (<b>g</b>) DeepLabv3+ model, (<b>h</b>) FA-MobileUNet model. Black, cyan, red, brown, and green represent the sea surface, oil spills, look-alikes, ships, and land, respectively.</p>
Figure 12
<p>The segmentation results of 106th image in the MKLab dataset: (<b>a</b>) original SAR image, (<b>b</b>) the corresponding ground truth data, and results from (<b>c</b>) U-Net model, (<b>d</b>) LinkNet model, (<b>e</b>) PSPNet model, (<b>f</b>) DeepLabv2 model, (<b>g</b>) DeepLabv3+ model, (<b>h</b>) FA-MobileUNet model. Black, red, brown, and green represent the sea surface, look-alikes, ships, and land, respectively.</p>
Figure 13
<p>The segmentation results of U-Net model with different modules: (<b>a</b>) original SAR image, (<b>b</b>) the corresponding ground truth data, and results from (<b>c</b>) U-Net model, (<b>d</b>) U-Net model with CBAM, (<b>e</b>) U-Net model with ASPP module, (<b>f</b>) U-Net with FA module. Black, cyan, red and brown represent the sea surface, oil spills, look-alikes and ships, respectively.</p>
Figure 14
<p>The incorrect ground truth data in the dataset. A1 and A2 are the 111th and 275th training images, respectively. Black, cyan, red, brown, and green represent the sea surface, oil spills, look-alikes, ships, and land, respectively. (<b>a</b>) SAR image. (<b>b</b>) Ground truth data.</p>
Figure 15
<p>Example of the revised ground truth data in the dataset. B1 and B2 are the 140th and 157th training images, respectively. Black, cyan, red, brown, and green represent the sea surface, oil spills, look-alikes, ships, and land, respectively. (<b>a</b>) SAR image. (<b>b</b>) Original ground truth data. (<b>c</b>) Revised ground truth data.</p>
21 pages, 4154 KiB  
Article
Using the AIEM and Radarsat-2 SAR to Retrieve Bare Surface Soil Moisture
by Chengshen Yin, Quanming Liu and Yin Zhang
Water 2024, 16(11), 1617; https://doi.org/10.3390/w16111617 - 5 Jun 2024
Viewed by 871
Abstract
Taking the Jiefangzha irrigation area of the Inner Mongolia Autonomous Region as the research area, the response relationships between the backscattering coefficient and radar frequency, radar incidence angle, root-mean-square height, correlation length, and soil water content under different conditions were simulated using the advanced integral equation model (AIEM). The backscattering characteristics of exposed surfaces in cold and dry irrigation areas were discussed, and the reasons for the different effects were analyzed. Based on this, surface roughness models and statistical regression moisture inversion models were constructed from the co-polarized backscatter coefficients and the combined surface roughness. The correlation between the inverted surface roughness values and the measured values was R2 = 0.7569. The correlation between the simulated soil moisture values and the measured values was R2 = 0.8501, with an RMSE of 0.04. The findings showed a strong correlation between the regression-simulated values and the measured data, indicating that the model can be applied to soil moisture inversion with good accuracy. Compared with previous studies in the same area, the inversion model proposed in this paper has a higher accuracy and is more suitable for soil moisture inversion in the Jiefangzha irrigation area. These findings can support research on the water cycle and water environment assessment in the region. Full article
(This article belongs to the Special Issue Research on Soil Moisture and Irrigation)
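The statistical regression inversion described in the abstract (soil moisture estimated from the co-polarized backscatter coefficients and a combined roughness term) can be sketched as a simple multiple linear regression. The sampling points, value ranges, and coefficients below are synthetic placeholders, not the fitted model from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(7)
n = 60  # hypothetical number of field sampling points

# Synthetic stand-ins for co-polarized backscatter (dB) and combined roughness Zs.
sigma_hh = rng.uniform(-22, -8, n)
sigma_vv = rng.uniform(-20, -6, n)
zs = rng.uniform(0.1, 2.0, n)

# Synthetic volumetric soil moisture (cm^3/cm^3), only to make the example runnable.
mv = np.clip(0.30 + 0.012 * sigma_vv + 0.006 * sigma_hh + 0.02 * zs
             + rng.normal(0, 0.02, n), 0.02, 0.45)

# Statistical regression inversion: soil moisture as a linear function of
# the two co-polarized channels and the combined roughness term.
X = np.column_stack([sigma_hh, sigma_vv, zs])
model = LinearRegression().fit(X, mv)
pred = model.predict(X)
rmse = mean_squared_error(mv, pred) ** 0.5
print(f"R2 = {r2_score(mv, pred):.3f}, RMSE = {rmse:.3f} cm^3/cm^3")
print("coefficients:", np.round(model.coef_, 4), "intercept:", round(model.intercept_, 4))
```

In practice the fit would be performed on field-measured moisture and Radarsat-2 backscatter extracted at the sampling locations, with an independent subset held back for the validation scatterplots shown in the figures below.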
Figure 1
<p>Spatial location of the study area.</p>
Figure 2
<p>Response of the backscattering coefficient to changes in radar frequency.</p>
Figure 3
<p>Response of the backscattering coefficient to the angle change.</p>
Figure 4
<p>Response of the backscattering coefficient when the root-mean-square height changes.</p>
Figure 5
<p>Response of the backscattering coefficient to variations in the correlation length.</p>
Figure 6
<p>Response of the backscattering coefficient to changes in soil moisture.</p>
Figure 7
<p>Backscattering coefficient HH comparison.</p>
Figure 8
<p>Backscattering coefficient VV comparison.</p>
Figure 9
<p>Combined roughness (<span class="html-italic">Z<sub>S</sub></span>) inversion.</p>
Figure 10
<p>Correlation between the inversion value of the empirical model and the measured values of soil roughness.</p>
Figure 11
<p>Correlation between water content inversion values and measured values.</p>
Figure 12
<p>Soil water content classification chart.</p>