Remote Sens., Volume 13, Issue 4 (February-2 2021) – 292 articles

Cover Story: The COVID-19 pandemic has impacted polar research in many ways since the start of 2020, including the cancellation of field campaigns; the cancellation or postponement of important conferences, workshops, and training courses; delays in the delivery of scientific outputs because of campus shutdowns; and cancellations or delays in funding. Field campaigns to Svalbard are expected to remain severely affected in 2021. In response to the changing situation, SIOS initiated several operational activities to mitigate the new challenges resulting from the pandemic. The paper provides an extensive overview of EO, RS, and other operational activities developed in response to COVID-19. It is probably the first attempt to highlight the role of EO and RS in mitigating possible data gaps in long-term data series of scientific observations in one of the most remote places [...]
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
30 pages, 5039 KiB  
Article
Monitoring the Efficacy of Crested Floatingheart (Nymphoides cristata) Management with Object-Based Image Analysis of UAS Imagery
by Adam R. Benjamin, Amr Abd-Elrahman, Lyn A. Gettys, Hartwig H. Hochmair and Kyle Thayer
Remote Sens. 2021, 13(4), 830; https://doi.org/10.3390/rs13040830 - 23 Feb 2021
Cited by 5 | Viewed by 3275
Abstract
This study investigates the use of unmanned aerial systems (UAS) mapping for monitoring the efficacy of invasive aquatic vegetation (AV) management on a floating-leaved AV species, Nymphoides cristata (CFH). The study site consists of 48 treatment plots (TPs). Based on six unique flights over two days at three different flight altitudes, using both a multispectral and an RGB sensor, accuracy assessment of the final object-based image analysis (OBIA)-derived classified images yielded overall accuracies ranging from 89.6% to 95.4%. The multispectral sensor was significantly more accurate than the RGB sensor at measuring CFH areal coverage within each TP only at the highest multispectral spatial resolution (2.7 cm/pix at 40 m altitude). When measuring the response of the AV community area between the day of treatment and two weeks after treatment, there was no significant difference between the temporal area change from the reference datasets and the area changes derived from either the RGB or the multispectral sensor. Thus, water resource managers need to weigh small gains in accuracy from multispectral sensors against other operational considerations, such as additional processing time due to increased file sizes, higher financial costs for equipment procurement, and longer flight durations in the field.
(This article belongs to the Special Issue Remote Sensing in Aquatic Vegetation Monitoring)
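The overall accuracies reported above come from a standard confusion-matrix accuracy assessment. As a minimal illustration (not the authors' code; the class counts below are invented), overall accuracy is the trace of the confusion matrix divided by the total number of samples:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = correctly classified samples / all samples.

    cm: square confusion matrix, rows = reference classes, cols = classified.
    """
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical 4-class matrix (CFH, EAV, SAV, OTHER) for illustration only.
cm = np.array([
    [50,  2,  1,  0],
    [ 3, 40,  2,  1],
    [ 0,  1, 45,  2],
    [ 1,  0,  2, 50],
])
acc = overall_accuracy(cm)  # fraction of samples on the diagonal
```

Per-class producer's and user's accuracies follow the same pattern, dividing each diagonal entry by its row or column sum.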
Figure 1
<p>The study area (blue polygon) has project control points (PCPs) for the real-time kinematic (RTK) Global Navigation Satellite Systems (GNSS) base station (orange diamond) and RTK GNSS survey verification (blue circles); 3D ground control (red triangles) for spatial accuracy evaluation; 3D ground control (green crosses) for georeferencing the imagery; and approximate camera exposure locations (purple squares) for the Sony 3-band RGB camera showing the typical flight acquisition grid pattern.</p>
Full article ">Figure 2
<p>Distribution of 48 treatment plots (TPs) across the north and south test ponds in Stormwater Treatment Area 1 West (STA-1W). The untreated reference TPs are in TP2, TP14, TP33, and TP37.</p>
Full article ">Figure 3
<p>Typical land cover of test ponds both before and after treatment plot barrier installation. (<b>a</b>) Field site pre-construction of north pond treatment plots: facing south with low water. (<b>b</b>) Field site post-construction of south pond treatment plots: facing north with high water.</p>
Full article ">Figure 4
<p>(<b>a</b>) A dual-frequency RTK GNSS rover receiver setup over a typical 60 cm square aerial target with the dual-frequency RTK GNSS base station receiver shown both in the background and the inset (<b>b</b>).</p>
Full article ">Figure 5
<p>Comparison of crested floatingheart (CFH) vegetation communities at two different spatial resolutions for TP3 on 00AT. The red polygons were digitized on the Sony orthomosaic (1.7 cm/pix GSD) (<b>left</b>) through image interpretation of the orthomosaic and use of the Sony plot-level image (0.2 cm/pix GSD) (<b>right</b>) as a high-resolution, visual reference.</p>
Full article ">Figure 6
<p>Final classified image results for the initial day of treatment (00AT) for both the RGB Sony sensor and the multispectral RedEdge sensor. The four classes are crested floatingheart (CFH), emergent aquatic vegetation (EAV), submersed aquatic vegetation and water (SAV), and plastic sheeting (OTHER).</p>
Full article ">Figure 7
<p>Final classified image results for 14 days after treatment (14AT) for both the RGB Sony sensor and the multispectral RedEdge sensor.</p>
Full article ">Figure 8
<p>CFH community coverage area for 48 TPs across all six trials with a reference dataset for comparison on 00AT and 14AT.</p>
Full article ">Figure 9
<p>Class area for 48 TPs across all six trials organized by date of assessment.</p>
Full article ">Figure 10
<p>Difference in CFH area for 48 TPs between classified image (CI) area and reference polygon (RP) area across all six trials organized by date of assessment.</p>
Full article ">Figure 11
<p>Difference in CFH area for 48 TPs between paired datasets pre-treatment and post-treatment.</p>
Full article ">Figure 12
<p>Comparison of Normalized Difference Vegetation Index (NDVI) maps (<b>left</b>) with reference imagery (<b>right</b>) for TP13 on 00AT (<b>top</b>) and 14AT (<b>bottom</b>).</p>
Full article ">Figure 13
<p>Comparison of NDVI map (<b>left</b>) with reference plot-level imagery (<b>right</b>) for dead AV in TP23 on 14AT.</p>
Full article ">Figure 14
<p>Difference in CFH area between paired datasets pre-treatment and post-treatment organized by the 12 herbicide treatments.</p>
Full article ">
16 pages, 6822 KiB  
Article
Tracking the Evolution of Riverbed Morphology on the Basis of UAV Photogrammetry
by Teresa Gracchi, Guglielmo Rossi, Carlo Tacconi Stefanelli, Luca Tanteri, Rolando Pozzani and Sandro Moretti
Remote Sens. 2021, 13(4), 829; https://doi.org/10.3390/rs13040829 - 23 Feb 2021
Cited by 12 | Viewed by 4275
Abstract
Unmanned aerial vehicle (UAV) photogrammetry has recently become a widespread technique to investigate and monitor the evolution of different types of natural processes. Fluvial geomorphology is one such field of application in which UAVs potentially assume a key role, since they allow for overcoming the intrinsic limits of satellite- and airborne-based optical imagery on one side, and of traditional in situ investigations on the other. The main purpose of this paper was to obtain extensive products (digital terrain models (DTMs), orthophotos, and 3D models) in a short time, at low cost, and at high resolution, in order to verify the capability of this technique to analyze the active geomorphic processes on a 12 km long stretch of the French–Italian Roia River at both large and small scales. Two surveys, one year apart, were carried out over the study area, and a change detection analysis was performed by comparing the obtained DTMs in order to identify and characterize both possible morphologic variations related to fluvial dynamics and modifications in vegetation coverage. The results highlight how the understanding of different fluvial processes may be improved by appropriately exploiting UAV-based products, which can thus represent a low-cost and non-invasive tool to crucially support decision-makers involved in land management practices.
(This article belongs to the Special Issue Unmanned Aerial Systems and Digital Terrain Modeling)
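The change detection described above is a DEM-of-difference (DoD) computation. A minimal NumPy sketch, assuming two co-registered DTM rasters and treating the ±20 cm DTM error mentioned in the figure captions as the level of detection (the function and variable names are illustrative, not from the authors' workflow):

```python
import numpy as np

LOD = 0.20  # level of detection (m); differences within ±20 cm are within DTM error

def dem_of_difference(dtm_new, dtm_old, lod=LOD):
    """Elevation change between two co-registered DTMs, masking sub-error changes."""
    diff = dtm_new - dtm_old
    return np.where(np.abs(diff) > lod, diff, np.nan)  # NaN = no detectable change

def deposition_volume(dod, cell_area):
    """Volume of deposited material (positive detectable changes only)."""
    return np.nansum(np.where(dod > 0, dod, 0.0)) * cell_area
```

Erosion volume is obtained the same way from the negative changes; summing both gives the net sediment budget of the reach.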
Figure 1
<p>The final stretch of the Roia River, flowing into the sea at the city of Ventimiglia. In orange is the area covered by the unmanned aerial vehicle (UAV) surveys, including the final stretch of the Bevera Torrent. (Image from Apple Maps.)</p>
Full article ">Figure 2
<p>The Saturn Mini drone.</p>
Full article ">Figure 3
<p>Example of two markers used for the GPS coordinates of the ground control points (GCPs).</p>
Full article ">Figure 4
<p>Products obtained from the 2017 survey. (<b>A</b>) Orthophoto; (<b>B</b>) classified dense cloud, in which vegetation, deep water, riverbed, streets, and buildings are colored green, light blue, black, brown, and red, respectively; (<b>C</b>) digital surface model (DSM).</p>
Full article ">Figure 5
<p>Products obtained from the 2018 survey. (<b>A</b>) Orthophoto; (<b>B</b>) classified dense cloud, in which vegetation, deep water, riverbed, streets, and buildings are colored green, light blue, black, brown, and red, respectively; (<b>C</b>) digital surface model (DSM).</p>
Full article ">Figure 6
<p>(<b>A</b>,<b>B</b>) 3D point cloud of a bridge crossing the Bevera Torrent acquired in 2017 (<b>A</b>) and 2018 (<b>B</b>); (<b>C</b>,<b>D</b>) 3D point cloud of a bridge crossing the Roia River acquired in 2017 (<b>C</b>) and 2018 (<b>D</b>).</p>
Full article ">Figure 7
<p>Distribution of the GCPs: location and error. (<b>A</b>) UAV survey carried out in 2017; labels 1 and 2 highlight areas with a low concentration of GCPs. (<b>B</b>) UAV survey carried out in 2018.</p>
Full article ">Figure 8
<p>Differences between the digital terrain models (DTMs) obtained in 2018 and 2017. Details of the areas in the red frames are presented in <a href="#remotesensing-13-00829-f009" class="html-fig">Figure 9</a>. At this stage, vegetation is not considered.</p>
Full article ">Figure 9
<p>Some details of the differences between DTMs. Letters A−F specify the area and refer to <a href="#remotesensing-13-00829-f008" class="html-fig">Figure 8</a>. The second and third columns show the 2017 and 2018 orthophotos, respectively. The blue polygon in the 2017 orthophoto highlights a riverbank retreat of about 4 m.</p>
Full article ">Figure 10
<p>Section AA’ of the examined case A (<a href="#remotesensing-13-00829-f009" class="html-fig">Figure 9</a>) traced on the DTMs. This representation shows the riverbank retreat (on the left side) more clearly. The central part shows the overlap of the two DTMs; the difference values there fall within the error band (±20 cm).</p>
Full article ">Figure 11
<p>Temporal reconstruction of the channel in <a href="#remotesensing-13-00829-f009" class="html-fig">Figure 9</a>A. (Images from Google Earth.)</p>
Full article ">Figure 12
<p>Vegetation height changes between the 2017 and 2018 surveys.</p>
Full article ">
19 pages, 37891 KiB  
Article
Factors Influencing the Accuracy of Shallow Snow Depth Measured Using UAV-Based Photogrammetry
by Sangku Lee, Jeongha Park, Eunsoo Choi and Dongkyun Kim
Remote Sens. 2021, 13(4), 828; https://doi.org/10.3390/rs13040828 - 23 Feb 2021
Cited by 7 | Viewed by 3364
Abstract
Factors influencing the accuracy of UAV-photogrammetry-based snow depth distribution maps were investigated. First, UAV-based surveys were performed over the 0.04 km2 snow-covered study site in South Korea 37 times over a period of 13 days under 16 prescribed conditions composed of various photographing times, flight altitudes, and photograph overlap ratios. Then, multi-temporal Digital Surface Models (DSMs) of the study area covered with shallow snow were obtained using digital photogrammetric techniques. Next, multi-temporal snow depth distribution maps were created by subtracting the snow-free DSM from the multi-temporal DSMs of the study area. Snow depths in these UAV-photogrammetry-based snow maps were then compared to in situ measurements at 21 locations. The accuracy of each of the multi-temporal snow maps was quantified in terms of bias (median of residuals, QΔD) and precision (Normalized Median Absolute Deviation, NMAD). Lastly, various factors influencing these performance metrics were investigated. The results are as follows: (1) The QΔD and NMAD of the eight surveys performed under the optimal condition (50 m flight altitude and 80% overlap ratio) ranged from −2.30 cm to 5.90 cm and from 1.78 cm to 4.89 cm, respectively; the best survey case had a QΔD of −2.30 cm and an NMAD of 1.78 cm. (2) Lower UAV flight altitude and greater photograph overlap lower the NMAD and QΔD. (3) A greater number of Ground Control Points (GCPs) lowers the NMAD and QΔD. (4) The spatial configuration and accuracy of GCP coordinates influenced the accuracy of the snow depth distribution map. (5) A greater number of tie-points leads to higher accuracy. (6) Smooth fresh snow cover did not provide many tie-points, either resulting in significant error or making the entire photogrammetry process impossible.
(This article belongs to the Special Issue Measurement of Hydrologic Variables with Remote Sensing)
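The two performance metrics above are robust counterparts of mean error and standard deviation. A sketch of how they are commonly computed, assuming the standard NMAD definition with the 1.4826 Gaussian-consistency factor (this is not the authors' code):

```python
import numpy as np

def bias_and_nmad(estimated, observed):
    """Bias (median of residuals, Q_dD) and precision (NMAD) of snow-depth estimates."""
    r = np.asarray(estimated, float) - np.asarray(observed, float)
    q_dd = np.median(r)                           # robust bias estimate
    nmad = 1.4826 * np.median(np.abs(r - q_dd))   # robust spread; ~ std for Gaussian errors
    return q_dd, nmad
```

Unlike the mean and standard deviation, both statistics are insensitive to the occasional gross outlier that photogrammetric DSMs of snow can produce.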
Figure 1
<p>Study area: (<b>a</b>) Location of Daegwallyeong-Myeon; (<b>b</b>) Location of Automatic Weather Station (AWS) and study area; (<b>c</b>) Aerial photo of the study site. Yellow stars and X marks represent GCPs and check points on the ground surface, respectively. Green circles and red triangles represent GCPs and the in situ snow depth measurement location on the snow surface, respectively.</p>
Full article ">Figure 2
<p>Accumulated snow depth measured at the Daegwallyeong AWS station, which is 2 km away from the study site. Times of survey are shown as vertical lines. In situ measurements for each surveying day are expressed as box plots.</p>
Full article ">Figure 3
<p>Various GCPs used in this study: (<b>a</b>) A manhole; (<b>b</b>) Lines on the road; (<b>c</b>) A black planar object on the snow surface. These photographs were taken on 4 February.</p>
Full article ">Figure 4
<p>Box plot of (<b>a</b>) horizontal and (<b>b</b>) vertical uncertainties of 17 GCP measurements. Uncertainties of GCP on the snowed surface (4, 6, and 10 February) and on the original ground surface (21 February) were tested.</p>
Full article ">Figure 5
<p>Flight path and location of photo according to each method: (<b>a</b>) 50 m, 80%; (<b>b</b>) 100 m, 80%; (<b>c</b>) 50 m, 70%; (<b>d</b>) 100 m, 90%, respectively. (<b>e</b>) Three-dimensional view of the study area viewed from the take-off site.</p>
Full article ">Figure 6
<p>(<b>a</b>) Ortho+DSM of the snow surface; (<b>b</b>) Ortho+DSM of the ground surface; (<b>c</b>) the snow depth map obtained by subtracting the ground DSM from the snow-surface DSM (4 February 2020, 9 a.m.).</p>
Full article ">Figure 7
<p>Observed (x) versus estimated snow depth from the UAV photogrammetry (y) for 4, 6, and 10 February 2020.</p>
Full article ">Figure 8
<p>Box plot of (<b>a</b>) Median of residuals and (<b>b</b>) NMAD of the photogrammetry result varying with different UAV flight altitudes and photograph overlap.</p>
Full article ">Figure 9
<p>(<b>a</b>) Relationship between the number of tie points (x) and <math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> </mrow> </msub> </mrow> </semantics></math> of estimated snow depths (y). (<b>b</b>) Relationship between number of tie points (x) and NMAD of the estimated snow depths (y).</p>
Full article ">Figure 10
<p>(<b>a</b>) NMAD and (<b>b</b>) Median of the estimated snow depth varying with the number of GCPs used for photogrammetry.</p>
Full article ">Figure 11
<p>(<b>a</b>) Map of the error between the estimated snow depth DSM and the measurement. (<b>b</b>) Map of the check point error. (<b>c</b>) Kernel density of spatially distributed GCP points. (<b>d</b>) Map of the error snow depth after CP correction.</p>
Full article ">Figure 12
<p>Observed (x) versus estimated snow depth from the UAV photogrammetry (y) classified by surveying time of (<b>a</b>) 9 a.m., (<b>b</b>) 11 a.m., (<b>c</b>) 1 p.m., and (<b>d</b>) 3 p.m.</p>
Full article ">Figure 13
<p>(<b>a</b>) Orthophoto of a part of the study site with large shadows, (<b>b</b>) the ground surface, and (<b>c</b>) the estimated snow depth map of the same area.</p>
Full article ">Figure 14
<p>Locations of the tie points identified by the photogrammetry software used in this study for (<b>a</b>,<b>c</b>) 29 January 2020 (9 p.m.) and (<b>b</b>,<b>d</b>) 10 February 2020 (3 p.m.). (<b>e</b>) Gray-scale histograms of photos taken on 29 January and 10 February at the same position, respectively. The black areas in (<b>a</b>,<b>b</b>) represent areas where photographs could not be tied due to the lack of tie points.</p>
Full article ">Figure 15
<p>(<b>a</b>) Comparison of the original image (above) and the CLAHE image (below). (<b>b</b>) Gray-scale histograms of the 10 February image (bars) and the 29 January image (line) at the same position. (<b>c</b>) DSM constructed using CLAHE-filtered images.</p>
Full article ">
19 pages, 5086 KiB  
Article
Hyperspectral Image Destriping and Denoising Using Stripe and Spectral Low-Rank Matrix Recovery and Global Spatial-Spectral Total Variation
by Fang Yang, Xin Chen and Li Chai
Remote Sens. 2021, 13(4), 827; https://doi.org/10.3390/rs13040827 - 23 Feb 2021
Cited by 15 | Viewed by 3702
Abstract
A hyperspectral image (HSI) is easily corrupted by different kinds of noise, such as stripes, dead pixels, impulse noise, and Gaussian noise. Because they give little consideration to the structural specificity of stripes, many existing HSI denoising methods cannot effectively remove heavy stripes in mixed noise. In this paper, we classify the noise on HSI into three types: sparse noise, stripe noise, and Gaussian noise. The clean image and the different types of noise are treated as independent components, so the denoising task can naturally be regarded as an image decomposition problem. Exploiting the structural characteristics of stripes and the low-rank property of HSI, we propose to destripe and denoise the HSI using stripe and spectral low-rank matrix recovery combined with global spatial-spectral TV regularization (SSLR-SSTV). By considering the different properties of the different HSI ingredients, the proposed method cleanly separates the original image from the noise components. Both simulated and real image denoising experiments demonstrate that the proposed method achieves satisfactory denoising results compared with state-of-the-art methods; in particular, it outperforms the other methods in stripe noise removal, both visually and quantitatively.
(This article belongs to the Section Remote Sensing Image Processing)
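The decomposition view above models the observation as a sum of a low-rank clean image, sparse noise, stripes, and Gaussian noise, and recovers the components by alternating proximal updates. Two standard building blocks of such solvers are singular value thresholding (for low-rank terms) and soft thresholding (for sparse terms). A sketch of both follows; this is generic low-rank recovery machinery, not the authors' full SSLR-SSTV algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.

    Shrinks each singular value by tau, promoting a low-rank result.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: proximal operator of the L1 norm (sparse term update)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```

In an alternating scheme, the low-rank image estimate is refreshed with `svt` and the sparse-noise estimate with `soft` on the current residual, iterating until the components stabilize.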
Figure 1
<p>Low-rank property of hyperspectral image (HSI) and stripe component.</p>
Full article ">Figure 2
<p>The image decomposition result of our method in Pavia City Center image (band 22). The data are corrupted by the noise simulated in case 1 with <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1, <span class="html-italic">r</span> = 0.3, and <span class="html-italic">v</span> = 0.075; (<b>a</b>) observed image <math display="inline"><semantics> <mi mathvariant="bold">Y</mi> </semantics></math>, (<b>b</b>) denoised image <math display="inline"><semantics> <mi mathvariant="bold">X</mi> </semantics></math>, (<b>c</b>) the output sparse noise <math display="inline"><semantics> <mi mathvariant="bold">B</mi> </semantics></math>, and (<b>d</b>) the output stripe noise <math display="inline"><semantics> <mi mathvariant="bold">S</mi> </semantics></math>.</p>
Full article ">Figure 3
<p>The Pavia City Center image (11th band) (top) and zoom-in image (bottom) before and after denoising in case 1 with <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1, <span class="html-italic">r</span> = 0.5, <span class="html-italic">v</span> = 0.075. (<b>a</b>) Original image; (<b>b</b>) noise image; and the image denoising results of (<b>c</b>) BM4D, (<b>d</b>) NAILRMA, (<b>e</b>) LRMR, (<b>f</b>) LLRSSTV, (<b>g</b>) LRTV, (<b>h</b>) LRTF-DFR, and (<b>i</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 4
<p>The Washington DC Mall image (3rd band) (top) and zoom-in image (bottom) before and after denoising in case 1 with <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1, <span class="html-italic">r</span> = 0.5, <span class="html-italic">v</span> = 0.075. (<b>a</b>) Original image; (<b>b</b>) noise image; and the image denoising results of (<b>c</b>) BM4D, (<b>d</b>) NAILRMA, (<b>e</b>) LRMR, (<b>f</b>) LLRSSTV, (<b>g</b>) LRTV, (<b>h</b>) LRTF-DFR, and (<b>i</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 5
<p>The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of each band in the Pavia City Center images (<b>a</b>,<b>b</b>) and Washington DC Mall (<b>c</b>,<b>d</b>) with case 1: <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1, <span class="html-italic">r</span> = 0.5, <span class="html-italic">v</span> = 0.075.</p>
Full article ">Figure 6
<p>Comparison of the destriping results in Pavia City (11th band) under two different noise levels. (<b>a</b>) Original image. (<b>b</b>) Noise image <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>0.075</mn> </mrow> </semantics></math>. (<b>c</b>) LRID. (<b>d</b>) Our method. (<b>e</b>) Noise image with <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. (<b>f</b>) LRID. (<b>g</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 7
<p>Comparison of the destriping results in Washington DC Mall (3rd band) under two different noise levels. (<b>a</b>) Original image. (<b>b</b>) Noise image with <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>0.075</mn> </mrow> </semantics></math>. (<b>c</b>) LRID. (<b>d</b>) Our method. (<b>e</b>) Noise image with <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. (<b>f</b>) LRID. (<b>g</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 8
<p>Band 132 of the EO Hyperion dataset before and after denoising via the different methods: (<b>a</b>) Original image, (<b>b</b>) BM4D, (<b>c</b>) NAILRMA, (<b>d</b>) LRMR, (<b>e</b>) LLRSSTV, (<b>f</b>) LRTV, (<b>g</b>) LRTF-DFR, and (<b>h</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 9
<p>Band 206 of the EHYDICE Urban dataset before and after denoising via the different methods: (<b>a</b>) Original image, (<b>b</b>) BM4D, (<b>c</b>) NAILRMA, (<b>d</b>) LRMR, (<b>e</b>) LLRSSTV, (<b>f</b>) LRTV, (<b>g</b>) LRTF-DFR, and (<b>h</b>) SSLR-SSTV (proposed method).</p>
Full article ">Figure 10
<p>MPSNR values of SSLR-SSTV (proposed method) for Pavia City Center image (top) and Washington DC Mall (bottom) by varying parameters <math display="inline"><semantics> <mi>τ</mi> </semantics></math>, <math display="inline"><semantics> <mi>λ</mi> </semantics></math>, and <math display="inline"><semantics> <mi>β</mi> </semantics></math>. The data were corrupted by the noise simulated in case 1 and case 2 with <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1: (<b>a</b>,<b>f</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.075; (<b>b</b>,<b>g</b>) <span class="html-italic">r</span> = 0.5, <span class="html-italic">v</span> = 0.075; (<b>c</b>,<b>h</b>) <span class="html-italic">r</span> = 0.7, <span class="html-italic">v</span> = 0.075; (<b>d</b>,<b>i</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.05; (<b>e</b>,<b>j</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.1.</p>
Full article ">Figure 11
<p>MPSNR values of SSLR-SSTV (proposed method) for Pavia City Center image (top) and Washington DC Mall image (bottom) by varying parameters <math display="inline"><semantics> <msub> <mi>r</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>r</mi> <mn>2</mn> </msub> </semantics></math>. The data were corrupted by the noise simulated in case 1 and case 2 with <math display="inline"><semantics> <mi>σ</mi> </semantics></math> = 0.05 and <span class="html-italic">o</span> = 0.1 (<b>a</b>,<b>f</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.075; (<b>b</b>,<b>g</b>) <span class="html-italic">r</span> = 0.5, <span class="html-italic">v</span> = 0.075; (<b>c</b>,<b>h</b>) <span class="html-italic">r</span> = 0.7, <span class="html-italic">v</span> = 0.075; (<b>d</b>,<b>i</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.05; (<b>e</b>,<b>j</b>) <span class="html-italic">r</span> = 0.3, <span class="html-italic">v</span> = 0.1.</p>
Full article ">
18 pages, 24376 KiB  
Article
Assessing Near Real-Time Satellite Precipitation Products for Flood Simulations at Sub-Daily Scales in a Sparsely Gauged Watershed in Peruvian Andes
by Harold Llauca, Waldo Lavado-Casimiro, Karen León, Juan Jimenez, Kevin Traverso and Pedro Rau
Remote Sens. 2021, 13(4), 826; https://doi.org/10.3390/rs13040826 - 23 Feb 2021
Cited by 30 | Viewed by 5263
Abstract
This study investigates the applicability of Satellite Precipitation Products (SPPs) in near real-time for the simulation of sub-daily runoff in the Vilcanota River basin, located in the southeastern Andes of Peru. Data from rain gauge stations are used to evaluate the quality of Integrated Multi-satellite Retrievals for GPM–Early (IMERG-E), Global Satellite Mapping of Precipitation–Near Real-Time (GSMaP-NRT), Climate Prediction Center Morphing Method (CMORPH), and HydroEstimator (HE) at the pixel-station level, and these SPPs are used as meteorological inputs for hourly hydrological modeling. The GR4H model is calibrated with the hydrometric station with the longest record, and model simulations are also verified at one station upstream and two stations downstream of the calibration point. Compared with the observed sub-daily precipitation data, the results show that the IMERG-E product generally presents the highest quality, followed by GSMaP-NRT, CMORPH, and HE. Although the SPPs present positive and negative biases, ranging from mild to moderate, they do represent the diurnal and seasonal variability of hourly precipitation in the study area. In terms of the average Kling–Gupta efficiency (KGE), GR4H_GSMaP-NRT’ yielded the best representation of hourly discharges (0.686), followed by GR4H_IMERG-E’ (0.623), GR4H_Ensemble-Mean (0.617), GR4H_CMORPH’ (0.606), and GR4H_HE’ (0.516). Finally, the SPPs showed high potential for monitoring floods in the Vilcanota basin in near real-time at the operational level. The results obtained in this research are very useful for implementing flood early warning systems in the Vilcanota basin and will allow monitoring and short-term hydrological forecasting of floods by the Peruvian National Weather and Hydrological Service.
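The KGE scores quoted above combine correlation, variability, and bias into a single skill score, with 1 being a perfect match. A sketch of the 2009 Gupta et al. formulation, which is assumed here (the authors may use a variant):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency of simulated vs. observed discharge series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation component
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)
```

A simulation that doubles every observed flow keeps r = 1 but has alpha = beta = 2, so its KGE drops to 1 − √2 ≈ −0.41, illustrating how the metric penalizes bias and variability errors that correlation alone would miss.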
Figure 1
<p>Location of the Vilcanota river basin at INT gauge-station. Pluviometric and hydrometric network at the study domain.</p>
Full article ">Figure 2
<p>The structure of the GR4H model, taken from [<a href="#B36-remotesensing-13-00826" class="html-bibr">36</a>].</p>
Full article ">Figure 3
<p>Seasonal variability of the (<b>a</b>) relative error (RE), (<b>b</b>) coefficient of correlation (R), (<b>c</b>) root mean square error (RMSE), and (<b>d</b>) mean absolute error (MAE) metrics calculated using hourly data for all pluviometric stations in each monthly time-window.</p>
Full article ">Figure 4
<p>Correlation between the diurnal cycle of rainfall from 15 rain-gauge stations and estimates from (<b>a</b>) Integrated Multi-satellite Retrievals for GPM–Early (IMERG-E), (<b>b</b>) Global Satellite Mapping of Precipitation–Near Real-Time (GSMaP-NRT), (<b>c</b>) Climate Prediction Center Morphing Method (CMORPH), and (<b>d</b>) HydroEstimator (HE); considering only rainy days with more than 15 mm/day from the wet period (November to April).</p>
Full article ">Figure 5
<p>Observed and simulated hourly discharges at PIS gauge-station during (<b>a</b>–<b>e</b>) calibration and (<b>f</b>–<b>j</b>) validation periods, using four SPPs with bias correction and an ensemble mean streamflow scenario.</p>
Full article ">Figure 6
<p>(<b>a</b>) KGE, (<b>b</b>) MARE, (<b>c</b>) PBIAS, and (<b>d</b>) RMSE values for evaluating the hydrological performance of the GR4H model at PIS gauge-station during calibration, validation, and total period; using four SPPs with bias correction and an ensemble mean streamflow scenario.</p>
Full article ">Figure 7
<p>Scatter plot of (<b>a</b>–<b>e</b>) hourly, (<b>f</b>–<b>j</b>) daily, and (<b>k</b>–<b>o</b>) monthly observed and simulated discharges at the PIS gauge-station; using four SPPs with bias correction and an ensemble mean streamflow scenario.</p>
Full article ">Figure 8
<p>Observed and simulated hourly discharges at (<b>a</b>–<b>e</b>) SAL, (<b>f</b>–<b>j</b>) CHI, and (<b>k</b>–<b>o</b>) INT gauge-stations; using four SPPs with bias correction and an ensemble mean streamflow scenario.</p>
Full article ">Figure 9
<p>(<b>a</b>) Kling–Gupta efficiency (KGE), (<b>b</b>) Mean Absolute Relative Error (MARE), (<b>c</b>) Percentage Bias (PBIAS), and (<b>d</b>) RMSE values for evaluating the hydrological performance of the GR4H model at SAL, CHI, and INT gauge-stations during the verification period; using four SPPs with bias correction and an ensemble mean streamflow scenario.</p>
Full article ">Figure 10
<p>Example of simulated discharges in 11 Vilcanota’s river streams for the 22:00 and 02:00 hours from 6 and 7 February 2020, respectively; using four SPPs as meteorological forcing data of the GR4H model.</p>
Full article ">
29 pages, 7005 KiB  
Article
The SARSense Campaign: Air- and Space-Borne C- and L-Band SAR for the Analysis of Soil and Plant Parameters in Agriculture
by David Mengen, Carsten Montzka, Thomas Jagdhuber, Anke Fluhrer, Cosimo Brogi, Stephani Baum, Dirk Schüttemeyer, Bagher Bayat, Heye Bogena, Alex Coccia, Gerard Masalias, Verena Trinkel, Jannis Jakobi, François Jonard, Yueling Ma, Francesco Mattia, Davide Palmisano, Uwe Rascher, Giuseppe Satalino, Maike Schumacher, Christian Koyama, Marius Schmidt and Harry Vereecken
Remote Sens. 2021, 13(4), 825; https://doi.org/10.3390/rs13040825 - 23 Feb 2021
Cited by 19 | Viewed by 5022
Abstract
With the upcoming L-band Synthetic Aperture Radar (SAR) satellite mission Radar Observing System for Europe L-band SAR (ROSE-L) and its integration with existing C-band satellite missions such as Sentinel-1, multi-frequency SAR observations with high temporal and spatial resolution will become available. The SARSense campaign was conducted between June and August 2019 to investigate the potential for estimating soil and plant parameters at the agricultural test site in Selhausen (Germany). It included C- and L-band air- and space-borne observations accompanied by extensive in situ soil and plant sampling as well as unmanned aerial system (UAS) based multispectral and thermal infrared measurements. In this regard, we introduce a new publicly available SAR data set and present the first analysis of C- and L-band co- and cross-polarized backscattering signals regarding their sensitivity to soil and plant parameters. The results indicate that a multi-frequency approach is relevant for disentangling soil and plant contributions to the SAR signal and for identifying specific scattering mechanisms associated with the characteristics of different crop types, especially root crops and cereals. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
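Sensitivity analyses of the kind described in this abstract typically convert backscatter to decibels and correlate it against the in situ variable; a hedged sketch with synthetic data (the exponential soil moisture–backscatter relation below is purely illustrative, not the campaign's model):

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert a linear backscatter coefficient to decibels."""
    return 10.0 * np.log10(sigma0_linear)

def sensitivity(backscatter_db, variable):
    """Pearson correlation as a simple sensitivity measure."""
    return np.corrcoef(backscatter_db, variable)[0, 1]

# synthetic example: wetter soil -> stronger backscatter (toy relation)
soil_moisture = np.linspace(0.05, 0.35, 20)   # volumetric [m3/m3]
sigma0 = 0.01 * np.exp(6.0 * soil_moisture)   # linear backscatter
r = sensitivity(to_db(sigma0), soil_moisture)
print(round(r, 3))  # 1.0 for this noise-free toy relation
```

Working in dB linearizes multiplicative backscatter effects, which is why scatter plots such as those in Figures 10–13 are drawn against the dB signal.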
Show Figures

Graphical abstract

Figure 1: Map of the Selhausen test site and airborne flight tracks (left), and the individual crop types and field IDs (right).
Figure 2: Precipitation and temperature measurements of 2019 compared to the 29-year long-term average; the SARSense campaign period was both hotter and drier than average.
Figure 3: Cessna C208 carrying the SARSense radar setup; the top left and bottom right antennas are for L-band, the bottom left and top right antennas for C-band.
Figure 4: Comparison of airborne C- and L-band data with Sentinel-1 and ALOS-2 over the Selhausen test site for 21/22 June.
Figure 5: Normalized Difference Red-Edge (NDRE) index measured by the Micasense RedEdge-M multispectral sensor (left) and surface temperature (°C) measured by the FLIR VUE Pro 640 thermal infrared camera (right) on 27 June 2019.
Figure 6: Soil moisture sampling points for 21 June (left) and plant sampling points for 25 June and 7 August (right).
Figure 7: Comparison between unfiltered and speckle-filtered SAR images for C-band (left) and L-band (right), HH polarization, 19 June.
Figure 8: Temporal behavior of C-band air- and space-borne backscattering signals for flight tracks A, B, and C.
Figure 9: Temporal behavior of L-band air- and space-borne backscattering signals for flight tracks A, B, and C.
Figure 10: Scatter plots of soil moisture versus backscattering signal from co- and cross-polarized channels of C- and L-band satellite data.
Figure 11: Scatter plots of vegetation water content (left) and plant height (right) versus backscattering signal from co- and cross-polarized channels of C- and L-band satellite data.
Figure 12: Observed and detrended range profile of field F11 (left), backscattering signal in decibels (dB) for irrigated and non-irrigated areas (middle), and the related histograms (right).
Figure 13: Scatter plots of NDRE index versus backscattering signal from co- and cross-polarized channels of C- and L-band airborne data.
14 pages, 2123 KiB  
Article
The Impact of Shale Oil and Gas Development on Rangelands in the Permian Basin Region: An Assessment Using High-Resolution Remote Sensing Data
by Haoying Wang
Remote Sens. 2021, 13(4), 824; https://doi.org/10.3390/rs13040824 - 23 Feb 2021
Cited by 9 | Viewed by 3517
Abstract
The environmental impact of shale energy development is a growing concern in the US and worldwide. Although the topic is well studied in general, shale development’s impact on drylands has received much less attention in the literature. This study focuses on the effect of shale development on land cover in the Permian Basin region—a unique arid/semi-arid landscape experiencing an unprecedented intensity of drilling and production activities. Taking advantage of high-resolution remote sensing land cover data, we develop a fixed-effects panel (longitudinal) regression model to control for unobserved spatial heterogeneity and regionwide trends. The model allows us to understand the dynamics of land cover over the past decade of shale development. The results show that shale development had moderate but statistically significant negative impacts on shrubland and grassland/pasture. The effect is more strongly associated with hydrocarbon production volume than with the number of oil and gas wells drilled. Between shrubland and grassland/pasture, the impact on shrubland is more pronounced in magnitude, a result likely explained by the dominance of shrubland in the region. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry)
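A fixed-effects panel estimator of the kind described above can be implemented with a within (demeaning) transformation that absorbs unit-specific intercepts; a minimal one-regressor sketch with synthetic data (all variable names and values are illustrative, not from the paper):

```python
import numpy as np

def within_fe(y, x, group):
    """One-regressor fixed-effects (within) estimator: demean y and x
    inside each group to absorb group-specific intercepts (unobserved
    heterogeneity), then run OLS on the demeaned data."""
    y = np.asarray(y, dtype=float).copy()
    x = np.asarray(x, dtype=float).copy()
    group = np.asarray(group)
    for g in np.unique(group):
        m = group == g
        y[m] -= y[m].mean()
        x[m] -= x[m].mean()
    return (x @ y) / (x @ x)  # OLS slope; no intercept after demeaning

# two units with very different baselines but a common slope of -0.5
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)
group = np.repeat([0, 1], 100)
y = np.where(group == 0, 50.0, 5.0) - 0.5 * x + rng.normal(0.0, 0.1, 200)
print(round(within_fe(y, x, group), 2))  # close to -0.5
```

The point of the transformation is visible in the example: pooled OLS would be distorted by the two very different baselines, while the within estimator recovers the common slope.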
Show Figures

Figure 1: Land cover change (a) and energy production (b) in Lea and Eddy Counties, New Mexico. Data sources: (a) computed from the 30 m Crop Data Layer (CDL) by the National Agricultural Statistics Service (NASS); (b) computed from well-level data provided by the Petroleum Recovery Research Center at New Mexico Tech. Notes: (1) one MCF = one thousand cubic feet; (2) the sudden decline in developed land and the corresponding jump in cropland in 2010 is likely caused by the classification algorithm that processed the CDL [21].
Figure 2: Study area overlaid with the recoded CDL and well locations (as of 2017); plain (white) CDL pixels represent other land uses (mostly agricultural and developed) that were removed.
Figure 3: Recoded land cover and well locations within one PRISM pixel (4 × 4 km); the CDL raster layer is from 2010, and the PRISM pixel is centered on [32°52′30″N, 104°5′2.4″W] in the WGS 1984 datum (also used in Google Maps).
16 pages, 5040 KiB  
Article
High-Accuracy Real-Time Kinematic Positioning with Multiple Rover Receivers Sharing Common Clock
by Lin Zhao, Jiachang Jiang, Liang Li, Chun Jia and Jianhua Cheng
Remote Sens. 2021, 13(4), 823; https://doi.org/10.3390/rs13040823 - 23 Feb 2021
Cited by 1 | Viewed by 2197
Abstract
Because traditional real-time kinematic (RTK) positioning is limited by the reduced satellite visibility of obstructed navigation environments, we propose an improved RTK method with multiple rover receivers sharing a common clock. The proposed method enhances observational redundancy by blending the observations from all rover receivers, thereby strengthening the model. Integer ambiguity resolution in the proposed method is challenged by several inter-receiver biases (IRBs). The IRBs, comprising the inter-receiver code bias (IRCB) and the inter-receiver phase bias (IRPB), are calibrated by a pre-estimation method, which is possible because of their temporal stability. Multiple BeiDou Navigation Satellite System (BDS) dual-frequency datasets are collected to test the proposed method. The experimental results show that the IRCB and IRPB under the common clock mode are sufficiently stable for ambiguity resolution. Compared with the traditional method, the ambiguity resolution success rate and positioning accuracy of the proposed method are improved by 19.5% and 46.4%, respectively, in restricted satellite visibility environments. Full article
(This article belongs to the Special Issue Positioning and Navigation in Remote Sensing)
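The pre-estimation calibration mentioned above relies on the temporal stability of the inter-receiver biases; a toy sketch of that idea (simply averaging epoch-wise bias estimates and checking their spread is an assumption for illustration, not the paper's full procedure):

```python
import numpy as np

def calibrate_irb(epoch_estimates, tolerance=0.05):
    """Pre-estimate a temporally stable inter-receiver bias (IRB).

    epoch_estimates -- per-epoch raw bias values (e.g. from between-
    receiver single differences with geometry removed).  Returns the
    calibration value (the temporal mean) and a stability flag
    (standard deviation below `tolerance`)."""
    est = np.asarray(epoch_estimates, dtype=float)
    return est.mean(), bool(est.std() < tolerance)

# synthetic IRCB series: a constant 0.31 m bias plus small noise
rng = np.random.default_rng(1)
series = 0.31 + rng.normal(0.0, 0.01, 500)
bias, stable = calibrate_irb(series)
print(round(bias, 2), stable)  # 0.31 True
```

Only if the stability flag holds does it make sense to subtract the pre-estimated bias from later observations, which is the premise the abstract's calibration rests on.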
Show Figures

Graphical abstract

Figure 1: Schematic scenario of a platform equipped with two rover receivers; Ant 1 and Ant 2 denote the antennas of rover receivers r1 and r2, respectively.
Figure 2: Receiver and antenna setup.
Figure 3: 24 h time series of estimation results with Septentrio PolaRx5 receivers, showing pivot satellite visibility and the IRPB at the B1 and B2 frequencies before (left) and after (right) eliminating jumps. The single-differenced ambiguities of satellites a and b form the ambiguity Nab, which is coupled with the IRPB; the hexagram marker indicates an example pivot change epoch.
Figure 4: IRCB and IRPB estimates under non-common clock mode for the UB380 (left) and K708 (right) receivers at the B1 and B2 frequencies; the dotted line marks the mean IRB.
Figure 5: IRCB and IRPB estimates under non-common clock mode with two UB380 receivers at the B1 and B2 frequencies; the dotted line marks the mean IRB.
Figure 6: IRCB and IRPB estimates under common clock mode at the B1 and B2 frequencies; the dotted line marks the mean IRB.
Figure 7: IRCB (left) and IRPB (right) estimates at B1 and B2 after a receiver restart.
Figure 8: Sky plot of r1 and r2 in the simulation; the light blue area denotes satellite signals tracked by r1 and the yellow area those tracked by r2.
Figure 9: Positioning results for cutoff elevation angles of 10°, 15°, 20°, 25°, and 30°, with r1 and r2 tracking satellites under azimuth masks of 200° and 160°: tracked satellite numbers, PDOP, ADOP (with the 0.12-cycle level as a dashed line), and east-north-up errors (conditioned on PDOP < 30) for RTK with non-common clock receivers (NC-RTK) and RTK with rover receivers sharing a common clock (C-RTK).
27 pages, 32026 KiB  
Article
Interdecadal Changes in Aerosol Optical Depth over Pakistan Based on the MERRA-2 Reanalysis Data during 1980–2018
by Rehana Khan, Kanike Raghavendra Kumar, Tianliang Zhao, Waheed Ullah and Gerrit de Leeuw
Remote Sens. 2021, 13(4), 822; https://doi.org/10.3390/rs13040822 - 23 Feb 2021
Cited by 24 | Viewed by 3771
Abstract
The spatiotemporal evolution and trends of aerosol optical depth (AOD) over environmentally distinct regions of Pakistan are investigated for the period 1980–2018. The AOD data were obtained from the Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) reanalysis atmospheric products, together with Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals. The climatology of AODMERRA-2 is analyzed in three contexts: the entire study domain (Pakistan), six regions within the domain, and 12 cities chosen from across the domain. Time-series analysis of the MODIS and MERRA-2 AOD data shows similar patterns in the individual cities. The AOD and its seasonality vary strongly across Pakistan, with the lowest values (0.05 ± 0.04) in autumn over the desert region and the highest (0.40 ± 0.06) in summer over the coastal region. During the study period, the annual AOD increased at rates between 0.002 and 0.012 year−1, an increase attributed to population growth and to emissions from natural and/or anthropogenic sources. A general increase in annual AOD over the central to lower Indus Basin is ascribed to the large contribution of dust particles from the desert. During winter and spring, a significant decrease in AOD was observed in the northern regions of Pakistan. The MERRA-2 and MODIS trends (2002–2018) were compared, and the results show visible differences between the AOD datasets due to the use of different versions and collection methods. Overall, the present study provides insight into the regional differences in AOD and its trends, with pronounced seasonal behavior across Pakistan. Full article
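Annual AOD trends like those reported above (slopes in units of year−1) come from a least-squares fit to the yearly means; a short sketch with synthetic data (the series below is illustrative only):

```python
import numpy as np

def aod_trend(years, aod):
    """Linear AOD trend (units: AOD per year) via a least-squares fit."""
    slope, intercept = np.polyfit(years, aod, 1)
    return slope, intercept

# synthetic annual series rising by 0.005/year around a 0.3 baseline
years = np.arange(1980, 2019)
rng = np.random.default_rng(2)
aod = 0.3 + 0.005 * (years - 1980) + rng.normal(0.0, 0.01, years.size)
slope, intercept = aod_trend(years, aod)
print(round(slope, 3))  # close to 0.005
```

Applying this fit pixel by pixel, and testing each slope for significance, yields trend maps of the kind shown in the study.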
Show Figures

Graphical abstract

Figure 1: (a) Topographic map of the study area, Pakistan, with elevation shown by the color scale; red boxes enclose the major regions and blue solid circles mark the major cities of the study domain. (b) Geographical map showing the location of Pakistan in South Asia with its bordering countries and potential desert source regions; an arrow marks the direction of air masses toward the study region.
Figure 2: Scatter plots of MERRA-2 versus MODIS-Aqua AOD for the (a) DB, (b) DT, and (c) DTB (merged) data products over Pakistan for 2002–2018, using collocated monthly means; the red line is a linear least-squares fit, and the number of matched points (N), Pearson's correlation coefficient (R), bias, and root-mean-square error (RMSE) are indicated in each panel.
Figure 3: Autocorrelation of the annual MERRA-2 AOD550 time series as a function of lag over 1980–1999 for the six main study regions: (a) North Dry Region (NDR), (b) Katawaz Basin, (c) Indus Basin, (d) Balochistan Plateau, (e) Coastal region, and (f) Kharan Desert Region (KDR).
Figure 4: Annual and seasonal spatial distributions of AOD averaged over 39 years (1980–2018) from MERRA-2 and over 2002–2018 from MODIS-Aqua (DB) for Pakistan; blank areas have no MODIS data.
Figure 5: Month-to-month variation of AOD averaged over 2002–2018 for 12 cities in Pakistan (Gilgit, Muzafarabad, Peshawar, D.I.Khan, Lahore, Multan, Quetta, Sibbi, Chhor, Karachi, Khuzdar, Panjgur) from both MERRA-2 and MODIS-DB; blank columns indicate missing MODIS-DB data, and the scales differ between panels.
Figure 6: Temporal variation and linear trends in annual mean AOD for the study cities over 1980–2018 (Ya: 1980–2000; Yb: 2001–2018); the blue line shows the monthly mean AOD550 and the red line the fitted linear trend, with intercept and slope shown for each city (abbreviations as in Table 2).
Figure 7: Temporal variation and linear trends in monthly mean AOD for the four seasons during 1980–2018 in the six main study regions (NDR, Katawaz Basin, Indus Basin, Balochistan Plateau, Coastal region, and KDR), with the fitted trend equations shown in each panel.
Figure 8: Spatial distribution of annual and seasonal mean AOD trends from MODIS-DB (2002–2018) and from MERRA-2 for 1980–2018, the pre-MODIS period (1980–2000), and the post-MODIS period (2002–2018) over Pakistan and the surrounding region; shading gives the sign and magnitude of the trend, black dots mark the 95% confidence level, and blank areas have no MODIS data.
Figure 9: Hovmöller (time versus longitude) diagrams of MERRA-2 AOD showing the movement of the columnar aerosol load for the different regions (a–f) over 1980–2018.
Figure 10: Variation of MERRA-2 AOD over 2000–2018 for the study regions at different elevations, with population density (units/km²) on the secondary y-axis.
Figure 11: Probability density functions of monthly mean AOD for the sub-periods 1980–1990, 1991–2000, 2001–2010, and 2011–2018 in the six study regions.
20 pages, 5271 KiB  
Article
Improving the Selection of Vegetation Index Characteristic Wavelengths by Using the PROSPECT Model for Leaf Water Content Estimation
by Jian Yang, Yangyang Zhang, Lin Du, Xiuguo Liu, Shuo Shi and Biwu Chen
Remote Sens. 2021, 13(4), 821; https://doi.org/10.3390/rs13040821 - 23 Feb 2021
Cited by 9 | Viewed by 2593
Abstract
Equivalent water thickness (EWT) is a major indicator for the indirect monitoring of leaf water content in remote sensing. Many vegetation indices (VIs) have been proposed to estimate EWT from passive or active reflectance spectra; however, the characteristic wavelengths of these VIs are usually selected by statistical analysis for specific vegetation species. In this study, a characteristic wavelength selection algorithm based on the PROSPECT-5 model was proposed to obtain the characteristic wavelengths of the leaf biochemical parameters (leaf structure parameter (N), chlorophyll a + b content (Cab), carotenoid content (Car), EWT, and dry matter content (LMA)). The effect of combining the characteristic wavelengths of EWT with those of the other biochemical parameters on the accuracy of EWT estimation is discussed. Results demonstrate that the characteristic wavelengths of the leaf structure parameter N exhibit the greatest influence on EWT estimation. Two optimal characteristic wavelengths (1089 and 1398 nm) are then selected to build a new ratio VI (nRVI = R1089/R1398) for EWT estimation. Subsequently, the performance of the built nRVI and four optimal published VIs for EWT estimation is assessed using two simulation datasets and three in situ datasets. The built nRVI exhibited better performance (R2 = 0.9284, 0.8938, and 0.7766 and RMSE = 0.0013, 0.0022, and 0.0030 cm for the ANGERS, Leaf Optical Properties Experiment (LOPEX), and JR datasets, respectively) than the published VIs. These results demonstrate that the nRVI, built from characteristic wavelengths selected with a physical model, exhibits desirable universality and stability in EWT estimation. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
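The proposed index has a simple closed form, nRVI = R1089/R1398; a sketch of computing it from a sampled spectrum (the nearest-band lookup and the toy spectrum are assumptions for illustration):

```python
import numpy as np

def nrvi(wavelengths, reflectance):
    """Ratio index nRVI = R1089 / R1398 from the abstract, using the
    reflectance at the bands nearest to 1089 nm and 1398 nm (nearest-
    band lookup is an implementation choice, not from the paper)."""
    wl = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    r1089 = r[np.argmin(np.abs(wl - 1089.0))]
    r1398 = r[np.argmin(np.abs(wl - 1398.0))]
    return r1089 / r1398

# toy leaf spectrum sampled at 1 nm over 400-2500 nm
wl = np.arange(400, 2501)
refl = np.full(wl.size, 0.30)
refl[wl == 1089] = 0.45   # near-infrared shoulder
refl[wl == 1398] = 0.15   # water absorption feature
print(round(nrvi(wl, refl), 2))  # 3.0
```

Because 1398 nm lies in a water absorption feature, its reflectance falls as leaf water content rises, so the ratio grows with EWT.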
Show Figures

Figure 1: Workflow of the characteristic wavelength selection algorithm.
Figure 2: Positions of the selected wavelengths of N and EWT in the sensitivity analysis of the PROSPECT model for reflectance (N: structure index; Cab: chlorophyll; Car: carotenoid; EWT: equivalent water thickness; LMA: dry matter per area; Interactions: joint global sensitivity for combinations of parameters).
Figure 3: Correlation between the spectral information at the characteristic wavelengths of N + EWT and the EWT parameter, using the noise-free synthetic dataset.
Figure 4: Correlations between EWT and the selected published VIs for the different datasets: (a) noise-free synthetic spectra, (b) synthetic spectra with 2% random Gaussian noise, (c) ANGERS, (d) LOPEX, and (e) JR.
Figure 5: Correlations between EWT and nRVI for the different datasets: (a) noise-free synthetic spectra, (b) synthetic spectra with 2% random Gaussian noise, (c) ANGERS, (d) LOPEX, and (e) JR.
Figure 6: Robustness of nRVI, WBI, MSI, SR, and NDII to variations in N across various EWT values.
Figure 7: Accuracy of nRVI, WBI, MSI, SR, and NDII for EWT estimation across various N values, based on the synthetic dataset with 2% Gaussian noise.
Figure 8: Robustness of nRVI, WBI, MSI, SR, and NDII to variations in LMA across various EWT values.
Figure 9: Accuracy of nRVI, WBI, MSI, SR, and NDII for EWT estimation across various LMA contents, based on the synthetic dataset with 2% Gaussian noise.
Figure 10: Accuracy of nRVI, WBI, MSI, SR, and NDII for EWT estimation under various levels of random Gaussian spectral noise.
16 pages, 6112 KiB  
Technical Note
Multiscale Weighted Adjacent Superpixel-Based Composite Kernel for Hyperspectral Image Classification
by Yaokang Zhang and Yunjie Chen
Remote Sens. 2021, 13(4), 820; https://doi.org/10.3390/rs13040820 - 23 Feb 2021
Cited by 7 | Viewed by 2349
Abstract
This paper presents a composite kernel method based on multiscale weighted adjacent superpixels (MWASCK) for classifying hyperspectral images (HSIs). The MWASCK adequately exploits the spatial-spectral features of weighted adjacent superpixels to guarantee that more accurate spectral features can be extracted. First, a superpixel segmentation algorithm divides the HSI into multiple superpixels. Second, the similarities between each target superpixel and its adjacent superpixels (ASs) are calculated to construct the spatial features. Finally, a weighted AS-based composite kernel (WASCK) method for HSI classification is proposed. To avoid searching for an optimal superpixel scale and to fuse multiscale spatial features, the MWASCK method uses multiscale weighted superpixel neighborhood information. Experiments on two real HSIs indicate the superior performance of the WASCK and MWASCK methods compared with some popular classification methods. Full article
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
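A composite kernel combines a spectral kernel with a spatial kernel built from superpixel features, weighted by a mixing coefficient; a generic sketch (Gaussian RBF kernels and the weight mu are standard choices in the composite-kernel literature, not the paper's exact settings):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def composite_kernel(spec_a, spec_b, spat_a, spat_b, mu=0.5, sigma=1.0):
    """Weighted composite kernel K = mu*K_spectral + (1-mu)*K_spatial."""
    return (mu * rbf_kernel(spec_a, spec_b, sigma)
            + (1.0 - mu) * rbf_kernel(spat_a, spat_b, sigma))

# toy data: 4 pixels, 5 bands; spatial features = superpixel mean spectra
rng = np.random.default_rng(3)
spec = rng.normal(size=(4, 5))
spat = rng.normal(size=(4, 5))
K = composite_kernel(spec, spec, spat, spat, mu=0.6)
print(K.shape, bool(np.allclose(K, K.T)))  # (4, 4) True
```

Because a convex combination of valid kernels is itself a valid kernel, the resulting matrix can be passed directly to a kernel classifier such as an SVM.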
Show Figures

Graphical abstract

Figure 1: Different spatial region selections: (a) square; (b) superpixel; (c) weighted adjacent superpixels (ASs); (d) features of the square region; (e) features of the superpixel region; (f) features of the weighted AS region. The greener the color in (d–f), the more accurate the features.
Figure 2: Flowchart of the proposed multiscale weighted adjacent superpixel composite kernel (MWASCK) method.
Figure 3: Classification maps for the Indian Pines image: (a) ground truth; (b) SVM-RBF; (c) SVMCK; (d) SCMK; (e) RMK; (f) ASMGSSK; (g) WASCK; (h) MWASCK.
Figure 4: Classification maps for the University of Pavia image: (a) ground truth; (b) SVM-RBF; (c) SVMCK; (d) SCMK; (e) RMK; (f) ASMGSSK; (g) WASCK; (h) MWASCK.
Figure 5: OA values of WASCK for different numbers of superpixels: (a) Indian Pines; (b) University of Pavia.
Figure 6: OA values of single-scale and multiscale MWASCK for different numbers of training samples: (a) Indian Pines; (b) University of Pavia.
Figure 7: OA values for different spectral kernel weights: (a) WASCK; (b) MWASCK.
Figure 8: OA values for different kernel widths σw: (a) Indian Pines; (b) University of Pavia.
Figure 9: Effect of the number of training samples on the different algorithms: (a) Indian Pines; (b) University of Pavia.
19 pages, 1762 KiB  
Article
TNNG: Total Nuclear Norms of Gradients for Hyperspectral Image Prior
by Ryota Yuzuriha, Ryuji Kurihara, Ryo Matsuoka and Masahiro Okuda
Remote Sens. 2021, 13(4), 819; https://doi.org/10.3390/rs13040819 - 23 Feb 2021
Cited by 4 | Viewed by 2358
Abstract
We introduce a novel regularization function for hyperspectral images (HSI), based on the nuclear norms of gradient images. Unlike conventional low-rank priors, we achieve a gradient-based low-rank approximation by minimizing the sum of nuclear norms associated with rotated planes in the gradient of an HSI. Our method explicitly and simultaneously exploits the correlation in the spectral domain as well as the spatial domain, and it exploits the low-rankness of a global region to enhance the dimensionality reduction by the prior. Since our method considers the low-rankness in the gradient domain, it detects anomalous variations more sensitively. Our method achieves high-fidelity image recovery using a single regularization function, without the explicit use of any sparsity-inducing priors such as the ℓ<sub>0</sub>, ℓ<sub>1</sub> and total variation (TV) norms. We also apply this regularization to a gradient-based robust principal component analysis and show its superiority in HSI decomposition. The proposed regularization is validated on a variety of HSI reconstruction/decomposition problems, where performance comparisons with state-of-the-art methods demonstrate its superior performance. Full article
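To make the prior concrete, the sketch below evaluates a sum of nuclear norms of gradient unfoldings for an HSI cube. It is a simplification under stated assumptions — plain horizontal/vertical forward differences and a single (pixels × bands) unfolding per gradient — whereas the paper minimizes nuclear norms associated with rotated planes of the gradient:

```python
import numpy as np

def tnng(hsi):
    """Sum of nuclear norms of the spatial-gradient unfoldings of an HSI.

    hsi: array of shape (H, W, B). Each gradient cube is unfolded into a
    (H*W, B) matrix whose nuclear norm (sum of singular values) couples
    the spatial and spectral domains, as the abstract describes.
    """
    grad_h = np.diff(hsi, axis=1, append=hsi[:, -1:, :])  # horizontal differences
    grad_v = np.diff(hsi, axis=0, append=hsi[-1:, :, :])  # vertical differences
    total = 0.0
    for grad in (grad_h, grad_v):
        unfolding = grad.reshape(-1, grad.shape[-1])
        total += np.linalg.svd(unfolding, compute_uv=False).sum()
    return total
```

For a cube whose bands are scaled copies of one spatial pattern, each unfolding is rank one and the nuclear norm collapses to the Frobenius norm of the gradient — the spectrally correlated case this kind of prior rewards.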
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Figure 1
<p>An example for the diagonal matrix <math display="inline"><semantics> <msubsup> <mi mathvariant="bold">S</mi> <mi>h</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> of <span class="html-italic">PaviaU</span> in which the diagonal elements are singular values of <math display="inline"><semantics> <msubsup> <mi mathvariant="bold">G</mi> <mi>h</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math>. The three images show <math display="inline"><semantics> <msubsup> <mi mathvariant="bold">S</mi> <mi>h</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> of the original input, noisy image, and restoration results, respectively. (<b>a</b>) original, (<b>b</b>) noisy, (<b>c</b>) restoration.</p>
Full article ">Figure 2
<p>Gradient images and rotation by <math display="inline"><semantics> <msubsup> <mrow> <mi mathvariant="bold">P</mi> </mrow> <mi>v</mi> <mo>′</mo> </msubsup> </semantics></math> and <math display="inline"><semantics> <msubsup> <mrow> <mi mathvariant="bold">P</mi> </mrow> <mi>h</mi> <mo>′</mo> </msubsup> </semantics></math>.</p>
Full article ">Figure 3
<p>Results for CS reconstruction experiments in <span class="html-italic">Stanford</span> (130th band). (<b>a</b>) original image, (<b>b</b>) observation, (<b>c</b>) ASTV, (<b>d</b>) SSTV, (<b>e</b>) HSSTV, (<b>f</b>) TNNG (ours).</p>
Full article ">Figure 4
<p>Dead lines/circles are generated by pixel-wise multiplication with a mask. (<b>a</b>) original image, (<b>b</b>) mask, (<b>c</b>) generated image.</p>
Full article ">Figure 5
<p>Results for GRPCA in <span class="html-italic">Washington</span> (76th band). (<b>a</b>) Input image, (<b>b</b>) Image with anomalies, (<b>c</b>) Low-rank component, (<b>d</b>) Sparse component</p>
Full article ">Figure 6
<p>Data for Pansharpening in <span class="html-italic">PaviaC</span> (Type I). (<b>a</b>) original image, (<b>b</b>) pan image, (<b>c</b>) HSI.</p>
Full article ">Figure 7
<p>Results for Pansharpening experiments in <span class="html-italic">PaviaC</span>. (<b>a</b>) original image, (<b>b</b>) GFPCA, (<b>c</b>) BayesNaive, (<b>d</b>) BayesSparse, (<b>e</b>) HySure, (<b>f</b>) TNNG (ours).</p>
Full article ">
23 pages, 4126 KiB  
Article
Upscaling Northern Peatland CO2 Fluxes Using Satellite Remote Sensing Data
by Sofia Junttila, Julia Kelly, Natascha Kljun, Mika Aurela, Leif Klemedtsson, Annalea Lohila, Mats B. Nilsson, Janne Rinne, Eeva-Stiina Tuittila, Patrik Vestin, Per Weslien and Lars Eklundh
Remote Sens. 2021, 13(4), 818; https://doi.org/10.3390/rs13040818 - 23 Feb 2021
Cited by 23 | Viewed by 5879
Abstract
Peatlands play an important role in the global carbon cycle as they contain a large soil carbon stock. However, current climate change could potentially shift peatlands from being carbon sinks to carbon sources. Remote sensing methods provide an opportunity to monitor carbon dioxide (CO2) exchange in peatland ecosystems at large scales under these changing conditions. In this study, we developed empirical models of the CO2 balance (net ecosystem exchange, NEE), gross primary production (GPP), and ecosystem respiration (ER) that could be used for upscaling CO2 fluxes with remotely sensed data. Two to three years of eddy covariance (EC) data from five peatlands in Sweden and Finland were compared to modelled NEE, GPP and ER based on vegetation indices from 10 m resolution Sentinel-2 MSI and land surface temperature from 1 km resolution MODIS data. To ensure a precise match between the EC data and the Sentinel-2 observations, a footprint model was applied to derive footprint-weighted daily means of the vegetation indices. Average model parameters for all sites were acquired with a leave-one-out cross-validation procedure. Both the GPP and the ER models agreed well with the EC-derived fluxes (R2 = 0.70 and 0.56, NRMSE = 14% and 15%, respectively). The performance of the NEE model was weaker (average R2 = 0.36 and NRMSE = 13%). Our findings demonstrate that using optical and thermal satellite sensor data is a feasible method for upscaling the GPP and ER of northern boreal peatlands, although further studies are needed to investigate the sources of the unexplained spatial and temporal variation of the CO2 fluxes. Full article
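The regression step summarized above (cf. Figure 2b, where GPP is regressed against the product of EVI2, a moisture scalar W<sub>s</sub>, and daytime LST) can be sketched as an ordinary least-squares fit. The linear model form, the absence of variable scaling, and the synthetic inputs are assumptions of this illustration, not the paper's exact parameterization:

```python
import numpy as np

def fit_gpp_model(evi2, w_s, lst_day, gpp_obs):
    """Fit GPP = a * (EVI2 * W_s * LST) + b by ordinary least squares."""
    x = evi2 * w_s * lst_day
    design = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(design, gpp_obs, rcond=None)
    return a, b

def nrmse_percent(pred, obs):
    """Normalized RMSE as a percentage of the observed range, the skill
    measure quoted in the abstract (e.g., 14% for the GPP model)."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return 100.0 * rmse / (obs.max() - obs.min())
```

A leave-one-out scheme over sites, as in the paper, would refit (a, b) with each site withheld and evaluate the prediction on the held-out site.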
(This article belongs to the Special Issue Remote Sensing of Carbon Fluxes and Stocks)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Locations of the peatland EC flux measurement sites.</p>
Full article ">Figure 2
<p>(<b>a</b>) Linear regression for EC-derived mean daily GPP against daily EVI2; (<b>b</b>) linear regression for EC-derived mean daily GPP against the product of EVI2, moisture scalar W<sub>s</sub>, and daytime LST. All the sites and available years were included.</p>
Full article ">Figure 3
<p>Exponential regression between MODIS daytime LST and EC-derived ER at all the sites and years.</p>
Full article ">Figure 4
<p>(<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) Daily time series of EC-derived GPP (EC), modelled GPP using the LOOCV parameterization (RS joint), and modelled GPP with the site-specific parameters (RS site); (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) EC-derived GPP versus modelled remote sensing GPP. Black solid line is the 1:1 line, black dashed lines are the 1:2 and 2:1 lines.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>(<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) Daily time series of EC-derived ER (EC), modelled ER using the LOOCV parametrization (RS joint) and modelled ER with the site-specific parameters (RS site); (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) EC-derived ER versus modelled remote sensing ER. Black solid line is the 1:1 line, black dashed lines are the 1:2 and 2:1 lines.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>(<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) Daily time series of EC-derived NEE (EC), modelled NEE using the LOOCV parametrization (RS joint) and modelled NEE with the site-specific parameters (RS site); (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) EC-derived NEE versus modelled remote sensing NEE. Black solid line is the 1:1 line, black dashed lines are the 1:2 and 2:1 lines.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>Time series of cumulative NEE for Abisko-Stordalen in 2017, 2018, and 2019 for the original EC measured NEE (EC orig), the TIMESAT smoothed NEE (EC smooth), the non-linear regression model of NEE with joint parameters (RS joint), and the non-linear regression model of NEE with site-specific parameters (RS site). There is nearly complete correspondence between the original and the spline-smoothed EC data.</p>
Full article ">Figure 8
<p>Mean growing season GPP modelled using Equation (4) with LOOCV-parameterization and Sentinel-2 data as input during the growing season in 2017 (top left) and 2018 (top right) at the Lompolojänkkä site. The black lines show 80% of the annual EC flux footprint climatologies. The background image is an aerial photograph recorded in 2018 by the National Land Survey of Finland.</p>
Full article ">
14 pages, 3004 KiB  
Article
Diurnal Cycle of Passive Microwave Brightness Temperatures over Land at a Global Scale
by Zahra Sharifnezhad, Hamid Norouzi, Satya Prakash, Reginald Blake and Reza Khanbilvardi
Remote Sens. 2021, 13(4), 817; https://doi.org/10.3390/rs13040817 - 23 Feb 2021
Cited by 5 | Viewed by 3373
Abstract
Satellite-borne passive microwave radiometers provide brightness temperature (TB) measurements in a large spectral range which includes a number of frequency channels and generally two polarizations: horizontal and vertical. These TBs are widely used to retrieve several atmospheric and surface variables and parameters such as precipitation, soil moisture, water vapor, air temperature profile, and land surface emissivity. Since TBs are measured at different microwave frequencies with various instruments and at various incidence angles, spatial resolutions, and radiometric characteristics, a mere direct integration of them from different microwave sensors would not necessarily provide consistency. However, when appropriately harmonized, they can provide a complete dataset to estimate the diurnal cycle. This study first constructs the diurnal cycle of land TBs using the non-sun-synchronous Global Precipitation Measurement (GPM) Microwave Imager (GMI) observations by utilizing a cubic spline fit. The acquisition times of GMI vary from day to day and, therefore, the shape (amplitude and phase) of the diurnal cycle for each month is obtained by merging several days of measurements. This diurnal pattern is used as a point of reference when intercalibrated TBs from other passive microwave sensors with daily fixed acquisition times (e.g., Special Sensor Microwave Imager/Sounder, and Advanced Microwave Scanning Radiometer 2) are used to modify and tune the monthly diurnal cycle to a daily diurnal cycle at a global scale. Although the GMI does not cover polar regions, the proposed method estimates a consistent diurnal cycle of land TBs at a global scale. Results show that the shape and peak of the constructed TB diurnal cycle are approximately similar to those of the diurnal cycle of land surface temperature. The diurnal brightness temperature range for different land cover types has also been explored using the derived diurnal cycle of TBs. In general, a large diurnal TB range of more than 15 K has been observed for the grassland, shrubland, and tundra land cover types, whereas it is less than 5 K over forests. Furthermore, seasonal variations in the diurnal TB range for different land cover types show a more consistent result over the Southern Hemisphere than over the Northern Hemisphere. The calibrated TB diurnal cycle may then be used to consistently estimate the diurnal cycle of land surface emissivity. Moreover, since changes in land surface emissivity are related to moisture change and freeze–thaw (FT) transitions in high-latitude regions, the results of this study enhance temporal detection of FT state, particularly during the transition times when multiple FT changes may occur within a day. Full article
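The merging step described in the abstract — pooling a month of GMI samples whose acquisition times drift from day to day and fitting a cubic spline over local solar hour — might look like the sketch below. The hourly binning and the periodic boundary condition are assumptions of this illustration, not the paper's exact recipe:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tb_diurnal_cycle(hours, tb):
    """Fit a periodic cubic spline to pooled TB samples vs. local solar hour.

    hours: local solar acquisition times in [0, 24); tb: brightness
    temperatures (K). Hourly-bin means give one knot per hour, and the
    first knot is repeated 24 h later so the spline closes periodically.
    """
    hours = np.asarray(hours, dtype=float) % 24.0
    tb = np.asarray(tb, dtype=float)
    centers, means = [], []
    for b in range(24):
        sel = (hours >= b) & (hours < b + 1)
        if sel.any():
            centers.append(b + 0.5)
            means.append(tb[sel].mean())
    knots = np.array(centers + [centers[0] + 24.0])
    values = np.array(means + [means[0]])
    # bc_type="periodic" also makes evaluation wrap around outside the knots
    return CubicSpline(knots, values, bc_type="periodic")
```

Evaluating the returned spline on a fine grid yields the monthly diurnal shape (amplitude and phase) that fixed-acquisition-time sensors such as SSMIS and AMSR2 can then tune toward a daily cycle.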
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Percentage global land coverage of ten land cover types: namely, tropical/sub-tropical evergreen broad-leaved forest (LC01), deciduous forest (LC02), evergreen broad-leaved and needle-leaved forest (LC03), deciduous woodland (LC04), sclerophyllous woodland and forest (LC05), wooded and non-wooded grassland (LC06), tundra and mossy bog (LC07), boreal and xeromorphic shrubland (LC08), non-vegetated desert (LC09), and ice (LC10).</p>
Full article ">Figure 2
<p>Satellite-borne passive microwave brightness temperature (PMW TB) observations at a sample location (17.4°N and 28°E) with non-vegetated desert land cover for October 2016 using (<b>a</b>) SSMIS F16, F17, F18, and AMSR2 at 19.35 GHz vertical polarization; (<b>b</b>) GMI at 18.7 GHz vertical polarization; (<b>c</b>) SSMIS F16, F17, F18, and AMSR2 at 37 GHz horizontal polarization; (<b>d</b>) GMI at 36.5 GHz horizontal polarization. Time in each subplot is in local solar hours.</p>
Full article ">Figure 3
<p>Schematic to compute diurnal cycle of PMW TBs for pixels between 68°S and 68°N, at a sample location (25.1°S and 138.6°E) with non-vegetated desert land cover for 15 January 2016 at 18.7 GHz vertical polarization (<b>a</b>) based on monthly GMI data, and (<b>b</b>) corrected by AMSR2 and SSMIS daily data. Time in each subplot is in local solar hours.</p>
Full article ">Figure 4
<p>Diurnal variations of interpolated TB at 37 GHz vertical polarization for (<b>a</b>) 17.9°S and 36.4°E with wooded and non-wooded grassland land cover for 16 June 2016; (<b>b</b>) 8.1°S and 26.6°E with deciduous woodland land cover for 1 October 2016; (<b>c</b>) 17.4°N and 28.1°E with non-vegetated desert land cover for 9 February 2016; and (<b>d</b>) 27.4°N and 115.1°E with evergreen broad-leaved and needle-leaved forest land cover for 27 December 2016. Time in each subplot is in local solar hours.</p>
Full article ">Figure 5
<p>Steps for SSMIS/AMSR2 data smoothing for regions above latitude 68°N and below latitude 68°S at 89 GHz vertical polarization. (<b>a</b>) Initial GPM Microwave Imager (GMI) fit and its average for a sample location at 13.9°N and 14°W with deciduous woodland land cover for October 2016; (<b>b</b>) normalized GMI fit obtained by subtracting the average from the initial fit for the same location and month; (<b>c</b>) the average of all the GMI normalized plots for pixels with the same land cover and month; (<b>d</b>) smoothing of the original SSMIS/AMSR2 data for a sample location at 71.2°N and 103°E with the same land cover and month by applying the land cover-averaged normalized plot at the corresponding time for 10 October 2016. Time in each subplot is in local solar hours.</p>
Full article ">Figure 6
<p>Diurnal variations of interpolated TB out of smoothed SSMIS F16, F17, F18, and AMSR2 for sample locations with different land covers for 13 March 2016 at 18.7 GHz horizontal polarization for (<b>a</b>) 71.5°S and 12.9°E with ice land cover, (<b>b</b>) 73.7°N and 122°W with tundra and mossy bog land cover, (<b>c</b>) 71°N and 114°E with deciduous woodland land cover, and (<b>d</b>) 77°N and 106.4°E with non-vegetated desert land cover. Time in each subplot is in local solar hours.</p>
Full article ">Figure 7
<p>Comparison of constructed TB diurnal cycle for different PMW channels in two contrasting seasons at (<b>a</b>) 45.9°N and 103°W with boreal and xeromorphic shrubland land cover for 31 August 2016; and (<b>b</b>) 3.6°N and 142.7°W with wooded and non-wooded grassland land cover for 5 February 2016. Time in each subplot is in local solar hours.</p>
Full article ">Figure 8
<p>Spatial distributions of monthly diurnal brightness temperature range (DTR) (K) at (<b>a</b>,<b>b</b>) 37 GHz vertical polarization, and (<b>c</b>,<b>d</b>) 89 GHz vertical polarization for two contrasting seasons of (<b>a</b>,<b>c</b>) February 2016 and (<b>b</b>,<b>d</b>) August 2016.</p>
Full article ">Figure 9
<p>Mean monthly DTR averaged over different land cover types for two contrasting seasons of January 2016 and July 2016 for the (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) Northern Hemisphere, and the (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) Southern Hemisphere at (<b>a</b>,<b>b</b>) 19 GHz vertical polarization, (<b>c</b>,<b>d</b>) 23 GHz vertical polarization, (<b>e</b>,<b>f</b>) 37 GHz vertical polarization, and (<b>g</b>,<b>h</b>) 89 GHz vertical polarization.</p>
Full article ">Figure 10
<p>Global DTR (K) map (<b>a</b>) from initial GMI TB values, and (<b>b</b>) expanded to the polar region from GMI final interpolated TB values for June 2016 at 23 GHz vertical polarization.</p>
Full article ">
28 pages, 94494 KiB  
Article
Landsat and Sentinel-2 Based Burned Area Mapping Tools in Google Earth Engine
by Ekhi Roteta, Aitor Bastarrika, Magí Franquesa and Emilio Chuvieco
Remote Sens. 2021, 13(4), 816; https://doi.org/10.3390/rs13040816 - 23 Feb 2021
Cited by 46 | Viewed by 10207
Abstract
Four burned area (BA) tools were implemented in Google Earth Engine (GEE) to support routine processes related to BA mapping using medium-spatial-resolution sensors (Landsat and Sentinel-2). The four tools are (i) the BA Cartography tool for supervised BA mapping over a user-selected extent and period, (ii) two tools implementing BA stratified random sampling to select the scenes and dates for validation, and (iii) the BA Reference Perimeter tool to obtain highly accurate BA maps focused on validating coarser BA products. Burned Area Mapping Tools (BAMT) go beyond the previously implemented Burned Area Mapping Software (BAMS) thanks to GEE's parallel processing capabilities and preloaded geospatial datasets. BAMT also exploits temporal image composites to obtain BA maps over larger extents and longer temporal periods. The tools consist of four scripts executable from the GEE Code Editor. The tools' performance was demonstrated in two case studies: the 2019/2020 fire season in Southeast Australia, where the BA cartography detected more than 50,000 km<sup>2</sup> using Landsat data, with commission and omission errors below 12% when compared to Sentinel-2 imagery; and the 2018 summer wildfires in Canada, where around 16,000 km<sup>2</sup> were found to have burned. Full article
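For context on the spectral signal the BA Cartography tool exploits, the sketch below computes the Normalized Burn Ratio (NBR, cf. Figure 5) and thresholds the pre-/post-fire difference (dNBR). This fixed-threshold stand-in is only an illustration — BAMT itself trains a Random Forest on composite differences (Figure 4) rather than applying a cut-off — and the 0.27 value is a commonly cited burn-severity threshold assumed here:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

def dnbr_burn_mask(pre_nir, pre_swir, post_nir, post_swir, threshold=0.27):
    """Flag pixels whose NBR dropped by at least `threshold` after the fire.

    Healthy vegetation has high NIR and low SWIR reflectance (high NBR);
    burning reverses that, so dNBR = NBR_pre - NBR_post is large and
    positive over burned surfaces.
    """
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    return dnbr >= threshold
```

With Sentinel-2, NBR is typically computed from B8/B8A (NIR) and B12 (long SWIR), the bands compared in Figure 5.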
(This article belongs to the Special Issue Remote Sensing of Burnt Area)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The red/green/blue natural color composition in a sample area in the 30TWN Sentinel-2 (S2) tile. The upper row (<b>a</b>–<b>c</b>) is a scene from 2020/02/03, where shadows were overcorrected, while the lower row (<b>d</b>–<b>f</b>) is a scene from 2020/01/09 without that effect. The first column (<b>a</b>,<b>d</b>) shows the Level-1C (L1C) product with Top of Atmosphere (TOA) reflectances; the second column (<b>b</b>,<b>e</b>) shows the Level-2A (L2A) product with Bottom of Atmosphere (BOA) reflectances as available in Google Earth Engine (GEE); and the third column (<b>c</b>,<b>f</b>) represents the L2A product with no topographic correction, generated by the authors.</p>
Full article ">Figure 2
<p>Total number of L1C and L2A scenes available in GEE by year, as of January 2021.</p>
Full article ">Figure 3
<p>Flowchart of the burned area (BA) Cartography tool’s algorithm.</p>
Full article ">Figure 4
<p>Generation of the BA product from S2 data in a sample area located in South Sudan. (<b>a</b>) Pre-fire composite derived from data between 2019/11/01 and 2019/12/31, with a Long SWIR/NIR/red color composition; (<b>b</b>) post-fire composite between 2020/01/01 and 2020/02/29 with the same color composition; (<b>c</b>) difference between the pre-fire and post-fire composites; (<b>d</b>) probability image returned by the Random Forest (RF) classifier; (<b>e</b>) burned seeds (in red) shown over the previous image; and (<b>f</b>) result exported in an ESRI (Environmental Systems Research Institute) Shapefile.</p>
Full article ">Figure 5
<p>Effect of the NIR band’s spatial resolution on the Normalized Burn Ratio (NBR) spectral index, in a sample area from 2020/01/13 located in South Sudan. (<b>a</b>) Shows the B8 band (NIR at 10 m); (<b>b</b>) the B8A band (NIR at 20 m); (<b>c</b>) the B12 band (Long SWIR at 20 m); (<b>d</b>) NBR at 10 m (derived from the B8 and B12 bands); (<b>e</b>) NBR at 20 m (derived from the B8A and B12 bands); and (<b>f</b>) the Long SWIR/NIR/red color composition at 10 m. Both NBR indices were computed by using the same B12 band at 20 m.</p>
Full article ">Figure 6
<p>Location maps of the study areas in (<b>a</b>) Southeast Australia and (<b>b</b>) Canada.</p>
Full article ">Figure 7
<p>BA detected in SEA from Landsat data from September 2019 to March 2020.</p>
Full article ">Figure 8
<p>Validation areas of 50 × 50 km<sup>2</sup> sampled by the validation area (VA) tool, based on S2 scenes, and the BA from the map derived from Landsat data.</p>
Full article ">Figure 9
<p>Reference perimeters in tile 56JML. (<b>a</b>–<b>h</b>) Perimeters created between consecutive images, from 3 September 2019 to 21 January 2020; (<b>i</b>) final merged reference perimeter (RP).</p>
Full article ">Figure 10
<p>Temporal disagreement between the BA product and VIIRS (Visible Infrared Imaging Radiometer Suite) hotspots in SEA. A positive difference means the BA was detected later than the hotspot.</p>
Full article ">Figure 11
<p>Comparison between BA maps derived from Sentinel-2 data (<b>a</b>) and CWFIS (<b>b</b>) perimeters, in a sample area located in Central British Columbia.</p>
Full article ">Figure 12
<p>Comparison between the BA Sentinel-2 product generated by BAMT and CWFIS perimeters in three sample areas in Canada: the first row (<b>a</b>–<b>c</b>) is located in S2 tile 10UCF, west of Prince George (BC); the second row (<b>d</b>–<b>f</b>) in tile 17TNN, north of Gran Sudbury (ON); and the third row (<b>g</b>–<b>i</b>) in tile 10UGD, near Valemount (BC) in the Rocky Mountains. The first (<b>a</b>,<b>d</b>,<b>g</b>) and second (<b>b</b>,<b>e</b>,<b>h</b>) columns show the pre- and post-fire conditions, respectively, and the third column (<b>c</b>,<b>f</b>,<b>i</b>) represents the BA from the S2 product and CWFIS perimeters.</p>
Full article ">Figure 13
<p>Temporal disagreement between the BA product and VIIRS hotspots in Canada, measured in days. A positive difference means the BA was detected after the hotspot.</p>
Full article ">Figure 14
<p>Comparison of burned surfaces by several studies, in SEA and Canada. Note that the vertical axis is logarithmic, and that several regions are not represented because of insignificant BA: Jervis Bay Territory (JBT) in SEA, and NL, NB, NS and PE in Canada.</p>
Full article ">Figure A1
<p>Commission and omission errors in the 10 validation areas in SEA.</p>
Figure A1 Cont.">
Full article ">
19 pages, 6977 KiB  
Article
InSAR Monitoring of Landslide Activity in Dominica
by Mary-Anne Fobert, Vern Singhroy and John G. Spray
Remote Sens. 2021, 13(4), 815; https://doi.org/10.3390/rs13040815 - 23 Feb 2021
Cited by 27 | Viewed by 5432
Abstract
Dominica is a geologically young, volcanic island in the eastern Caribbean. Due to its rugged terrain, substantial rainfall, and distinct soil characteristics, it is highly vulnerable to landslides. The dominant triggers of these landslides are hurricanes, tropical storms, and heavy prolonged rainfall events. These events frequently lead to loss of life and the need for a growing portion of the island’s annual budget to cover the considerable cost of reconstruction and recovery. For disaster risk mitigation and landslide risk assessment, landslide inventory and susceptibility maps are essential. Landslide inventory maps record existing landslides and include details on their type, location, spatial extent, and time of occurrence. These data are integrated (when possible) with the landslide trigger and pre-failure slope conditions to generate or validate a susceptibility map. The susceptibility map is used to identify the level of potential landslide risk (low, moderate, or high). In Dominica, these maps are produced using optical satellite and aerial images, digital elevation models, and historic landslide inventory data. This study illustrates the benefits of using satellite Interferometric Synthetic Aperture Radar (InSAR) to refine these maps. Our study shows that when using continuous high-resolution InSAR data, active slopes can be identified and monitored. This information can be used to highlight areas most at risk (for use in validating and updating the susceptibility map), and can constrain the time at which the landslide was initiated (for use in landslide inventory mapping). Our study shows that InSAR can be used to assist in the investigation of pre-failure slope conditions. For instance, our initial findings suggest there is more land motion prior to failure on clay soils with gentler slopes than on those with steeper slopes. A greater understanding of pre-failure slope conditions will support the generation of a more dependable susceptibility map. Our study also discusses the integration of InSAR deformation-rate maps and time-series analysis with rainfall data in support of the development of rainfall thresholds for different terrains. The information provided by InSAR can enhance inventory and susceptibility mapping, which will better assist with the island’s current disaster mitigation and resiliency efforts. Full article
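The MSBAS products in Figure 7 resolve ascending and descending LOS rates into up/down and east/west components. A simplified two-geometry inversion is sketched below; ignoring the north component (to which InSAR is largely insensitive) and idealizing the look azimuths as due east/west are assumptions of this sketch, not the MSBAS formulation:

```python
import numpy as np

def decompose_los(d_asc, d_desc, theta_asc_deg, theta_desc_deg):
    """Invert ascending/descending LOS rates for vertical (U) and east-west (E) motion.

    Idealized geometry: d_asc  = U*cos(ta) - E*sin(ta)
                        d_desc = U*cos(td) + E*sin(td)
    Positive U is up, positive E is east, positive d is motion toward the sensor.
    """
    ta = np.deg2rad(theta_asc_deg)
    td = np.deg2rad(theta_desc_deg)
    design = np.array([[np.cos(ta), -np.sin(ta)],
                       [np.cos(td), np.sin(td)]])
    obs = np.stack([np.atleast_1d(d_asc), np.atleast_1d(d_desc)])
    up, east = np.linalg.solve(design, obs)
    return up, east
```

For typical SAR incidence angles (roughly 20–50°) the 2×2 system is well conditioned, which is why combining both geometries, as in Figure 7, can separate subsidence from east-west slope motion.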
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Landslides triggered by Tropical Storm Erika in the village of Petite Savanne. (<b>a</b>) is from [<a href="#B12-remotesensing-13-00815" class="html-bibr">12</a>], while (<b>b</b>–<b>d</b>) are from [<a href="#B13-remotesensing-13-00815" class="html-bibr">13</a>]. The roads have been highlighted with red-dashed lines to illustrate the impact on the transportation infrastructure.</p>
Full article ">Figure 2
<p>(<b>a</b>) Geological age, (<b>b</b>) deposits, (<b>c</b>) elevation, and (<b>d</b>) annual rainfall amounts of Dominica overlaying a Digital Elevation Model (DEM) derived shaded relief. (<b>a</b>–<b>c</b>) were produced from data provided from [<a href="#B42-remotesensing-13-00815" class="html-bibr">42</a>], while (<b>d</b>) was developed from data provided from [<a href="#B43-remotesensing-13-00815" class="html-bibr">43</a>].</p>
Full article ">Figure 3
<p>DEM and RADARSAT-2 (RS2) ascending and descending (black rectangles) coverage.</p>
Full article ">Figure 4
<p>(<b>a</b>) Line-of-sight (LOS) descending deformation-rate map over Soufriere Village, overlain on Google Earth. (<b>b</b>) InSAR time-series overlaid on monthly rainfall measurements (obtained from [<a href="#B56-remotesensing-13-00815" class="html-bibr">56</a>]) showing differential rates of landslide motion. (<b>c</b>) Susceptibility map overlain on a DEM with landslide inventory data over the same area. (<b>c</b>) was produced from data provided through [<a href="#B23-remotesensing-13-00815" class="html-bibr">23</a>] and the rainfall data were from [<a href="#B56-remotesensing-13-00815" class="html-bibr">56</a>].</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>(<b>a</b>) Pre-Hurricane Maria LOS ascending deformation-rate map overlaying the soil types. (<b>b</b>) Debris flows (DF), debris slides (DS), and flood/debris flow channels (SS) triggered by Hurricane Maria, overlaying the annual rainfall map. DS appears to be the dominant landslide type, more frequent than DF and SS. InSAR shows more land motion prior to failure on the clay soil with gentler slopes than on steeper slopes. (<b>c</b>,<b>d</b>) are the slope details and time-series of the 10 points highlighted in (<b>a</b>,<b>b</b>), respectively. Soil type and annual rainfall data provided by [<a href="#B42-remotesensing-13-00815" class="html-bibr">42</a>,<a href="#B56-remotesensing-13-00815" class="html-bibr">56</a>], respectively.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>(<b>a</b>) Landslides triggered by Tropical Storm Erika [<a href="#B42-remotesensing-13-00815" class="html-bibr">42</a>] integrated with post-Tropical Storm Erika ascending LOS deformation-rate map. (<b>b</b>) is the deformation time-series of the points. Points X3 and X5 show active landslides after Tropical Storm Erika. (<b>c</b>) provides the slope details of points X1 to X5 highlighted in (<b>a</b>).</p>
Full article ">Figure 7
<p>MSBAS time-series and deformation-rate maps of the components of motion in the (<b>a</b>) up/down (U/D) and (<b>b</b>) east/west (E/W) directions overlain on a SAR image. (<b>a</b>,<b>b</b>) also show up/down and east/west differential deformation rates, respectively.</p>
Fig">
Full article ">
36 pages, 3939 KiB  
Article
Fusion of Airborne LiDAR Point Clouds and Aerial Images for Heterogeneous Land-Use Urban Mapping
by Yasmine Megahed, Ahmed Shaker and Wai Yeung Yan
Remote Sens. 2021, 13(4), 814; https://doi.org/10.3390/rs13040814 - 23 Feb 2021
Cited by 17 | Viewed by 4493
Abstract
The World Health Organization has reported that the number of worldwide urban residents is expected to reach 70% of the total world population by 2050. In the face of challenges brought about by the demographic transition, there is an urgent need to improve [...] Read more.
The World Health Organization has reported that the number of worldwide urban residents is expected to reach 70% of the total world population by 2050. In the face of challenges brought about by the demographic transition, there is an urgent need to improve the accuracy of urban land-use mappings to more efficiently inform urban planning processes. Decision-makers rely on accurate urban mappings to properly assess current plans and to develop new ones. This study investigates the effects of including conventional spectral signatures acquired by different sensors on the classification of airborne LiDAR (Light Detection and Ranging) point clouds using multiple feature spaces. The proposed method applied three machine learning algorithms—ML (Maximum Likelihood), SVM (Support Vector Machines), and MLP (Multilayer Perceptron Neural Network)—to classify LiDAR point clouds of a residential urban area after geo-registering them to aerial photos. The overall classification accuracy exceeded 97%, with height as the only geometric feature in the classifying space. Misclassifications occurred among different classes due to the independent acquisition of the aerial and LiDAR data, as well as shadow and orthorectification problems in the aerial images. Nevertheless, the outcomes are promising, as they surpass those achieved with large geometric feature spaces, and the approach is encouraging because it is computationally reasonable and integrates radiometric properties from affordable sensors. Full article
(This article belongs to the Special Issue Aerial LiDAR Applications in Urban Environments)
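The classifying space described in the abstract — height plus radiometric attributes — can be sketched as a simple per-point feature matrix. This is an illustrative sketch, not the authors' code; the feature set (H, I, RGB, NDVI) follows the names used in the Figure 12 caption, and the small guard term in the NDVI denominator is an assumption to avoid division by zero:

```python
import numpy as np

def build_feature_space(height, intensity, red, green, blue, nir):
    """Stack per-point LiDAR height with radiometric attributes
    (intensity, RGB, NDVI) into an (N, 6) classifying space."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-9, None)
    return np.column_stack([height, intensity, red, green, blue, ndvi])

# Example: three points; NDVI of the first point is (0.8-0.2)/(0.8+0.2) = 0.6.
h = np.array([2.5, 0.1, 12.0])
i = np.array([120.0, 80.0, 200.0])
r = np.array([0.2, 0.5, 0.3])
g = np.array([0.4, 0.5, 0.3])
b = np.array([0.1, 0.4, 0.2])
n = np.array([0.8, 0.5, 0.4])
X = build_feature_space(h, i, r, g, b, n)
```

The resulting matrix could then be fed to any point-wise classifier (ML, SVM, or MLP, as in the study).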
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Resampling of classified points: turquoise, black, and red represent points of three different classes, and grey points represent unclassified resampled points. (<b>a</b>) Extent of study area. (<b>b</b>) Pixel-size identification. (<b>c</b>) Point resampling into the grid.</p>
Full article ">Figure 2
<p>Conceptual overview of the proposed methodology.</p>
Full article ">Figure 3
<p>Different approaches to LiDAR (Light Detection and Ranging) ground filtering.</p>
Full article ">Figure 4
<p>Schematic structure of a simple MLP (Multilayer Perceptron Neural Network).</p>
Full article ">Figure 5
<p>Methods of assessment of machine learning classification algorithms (ROC: Receiver Operating Characteristic).</p>
Full article ">Figure 6
<p>Study region. (<b>a</b>) Location of the study zone in Toronto. (<b>b</b>) Study area zoomed-in.</p>
Full article ">Figure 7
<p>LiDAR (Light Detection and Ranging) data by height values.</p>
Full article ">Figure 8
<p>Ground filtering of LiDAR data. (<b>a</b>) Ground points. (<b>b</b>) Non-ground points.</p>
Full article ">Figure 9
<p>Distribution of training data. (<b>a</b>) Ground samples. (<b>b</b>) Non-ground samples.</p>
Fig">
Full article ">Figure 10
<p>Breakdown of training data. (<b>a</b>) Total samples. (<b>b</b>) Ground samples. (<b>c</b>) Non-ground samples.</p>
Full article ">Figure 11
<p>Examples of geo-registration corrections: black ovals represent error locations. (<b>a</b>) Sidewalk displacement before corrections: aerial photo as basemap. (<b>b</b>) Sidewalk correct positioning: aerial photo as basemap. (<b>c</b>) Buildings/grass misalignment before corrections: NIR (Near Infrared), G (Green), and B (Blue) visualization. (<b>d</b>) Buildings/grass alignment after corrections: NIR, G, and B visualization.</p>
Full article ">Figure 12
<p>Classification results on validation and test sets (ML: Maximum Likelihood, SVM: Support Vector Machine, H: Height, I: Intensity, RGB: Red Green Blue, NDVI: Normalized Difference Vegetation Index).</p>
Full article ">Figure 13
<p><span class="html-italic">K</span>-fold cross-validation results of MLP neural network classification.</p>
Full article ">Figure 14
<p>Classification accuracy (test set). (<b>a</b>) Overall classification. (<b>b</b>) Ground accuracy. (<b>c</b>) Non-ground accuracy.</p>
Full article ">Figure 15
<p>Processing time.</p>
Full article ">Figure 16
<p>Class accuracy (test set): scenario-3. (<b>a</b>) ML classification. (<b>b</b>) SVM classification. (<b>c</b>) MLP neural network classification.</p>
Full article ">Figure 17
<p>Final LiDAR mapping: black ovals represent examples of misclassification (A: high vegetation as buildings, B: light asphalt as dark asphalt, C: buildings as high vegetation).</p>
Full article ">Figure 18
<p>MLP neural network learning curves: scenario 3. (<b>a</b>) Ground training. (<b>b</b>) Non-ground training.</p>
Full article ">Figure 19
<p>Radiometric characteristics of vehicles: green ovals represent vehicle locations—NIR, G, and B visualization.</p>
Full article ">Figure 20
<p>Examples of misclassification caused by external factors (A: different acquisition time, B: shadow, C: inexact orthorectification) on the corresponding aerial photo; the zoom-in shows the LiDAR height layer overlaid on the image in transparent orange.</p>
Full article ">
24 pages, 17600 KiB  
Article
Rangeland Fractional Components Across the Western United States from 1985 to 2018
by Matthew Rigge, Collin Homer, Hua Shi, Debra Meyer, Brett Bunde, Brian Granneman, Kory Postma, Patrick Danielson, Adam Case and George Xian
Remote Sens. 2021, 13(4), 813; https://doi.org/10.3390/rs13040813 - 23 Feb 2021
Cited by 27 | Viewed by 9739
Abstract
Monitoring temporal dynamics of rangelands to detect and understand change in vegetation cover and composition provides a wealth of information to improve management and sustainability. Remote sensing allows the evaluation of both abrupt and gradual rangeland change at unprecedented spatial and temporal extents. [...] Read more.
Monitoring temporal dynamics of rangelands to detect and understand change in vegetation cover and composition provides a wealth of information to improve management and sustainability. Remote sensing allows the evaluation of both abrupt and gradual rangeland change at unprecedented spatial and temporal extents. Here, we describe the production of the National Land Cover Database (NLCD) Back in Time (BIT) dataset, which quantified the percent cover of rangeland components (bare ground, herbaceous, annual herbaceous, litter, shrub, and sagebrush (Artemisia spp. Nutt.)) across the western United States using Landsat imagery from 1985 to 2018. We evaluate the relationships of component trends with climate drivers at an ecoregion scale, describe the nature of landscape change, and demonstrate several case studies related to changes in grazing management, prescribed burns, and vegetation treatments. Our results showed that the net cover of shrub, sagebrush, and litter significantly (p < 0.01) decreased, bare ground and herbaceous cover had no significant change, and annual herbaceous cover significantly (p < 0.05) increased. Change was ubiquitous, with a mean of 92% of pixels showing some change and 38% of pixels showing significant change (p < 0.10). However, most change was gradual: well over half of the pixels had a total range of less than 10% cover, and most change occurred outside of known disturbances. The BIT data facilitate a comprehensive assessment of rangeland condition, evaluation of past management actions, understanding of system variability, and opportunities for future planning. Full article
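The per-pixel linear models behind the temporal-slope maps (Figure 3) amount to fitting one least-squares line per pixel through the annual cover values. A minimal numpy sketch, assuming a (years, rows, cols) stack of fractional cover; this is an illustration, not the authors' production code:

```python
import numpy as np

def per_pixel_trend(stack):
    """Temporal slope of component cover for every pixel of a
    (years, rows, cols) stack, via a least-squares line per pixel."""
    n_years, rows, cols = stack.shape
    t = np.arange(n_years, dtype=float)
    flat = stack.reshape(n_years, -1)           # one column per pixel
    slope, _intercept = np.polyfit(t, flat, deg=1)
    return slope.reshape(rows, cols)

# A pixel gaining 0.5% cover per year over a 34-year record (1985-2018)
# should report a slope of 0.5; a constant pixel should report ~0.
stack = np.full((34, 2, 2), 20.0)
stack[:, 0, 0] = 10.0 + 0.5 * np.arange(34)
slopes = per_pixel_trend(stack)
```

`np.polyfit` accepts a 2-D right-hand side, so all pixels are fitted in a single call rather than in a Python loop over the raster.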
Show Figures

Figure 1
<p>Environmental Protection Agency Level III ecoregions.</p>
Full article ">Figure 2
<p>Flow chart of major processing steps for Back-in-Time (BIT) data production. Cubist Regression Tree (RT) models were used to generate spatial outputs.</p>
Full article ">Figure 3
<p>Temporal slope of component cover from 1985 to 2018 calculated using linear models developed for each pixel.</p>
Full article ">Figure 4
<p>Fractional component cover averaged yearly across the study area (black line) and in areas unburned across the study period (gray line) (<b>A</b>–<b>F</b>). Mean growing season (April–September, black line) and non-growing season (October–March, gray line) climate variables for 1985–2018 (<b>G</b>–<b>I</b>). Significance is indicated as *, **, and *** for significance levels <span class="html-italic">p</span> &lt; 0.10, <span class="html-italic">p</span> &lt; 0.05, and <span class="html-italic">p</span> &lt; 0.01, respectively.</p>
Full article ">Figure 5
<p>Component cover trends from 1985 to 2018 in rangeland pixels averaged by Environmental Protection Agency Level III ecoregions.</p>
Full article ">Figure 6
<p>Growing season (April–September) and non-growing season (October–March) climate variable trends from 1985 to 2018 in rangeland pixels averaged by Environmental Protection Agency Level III ecoregions.</p>
Full article ">Figure 7
<p>Component change related to experimental prescribed (<span class="html-italic">Rx</span>) burn and control treatments conducted by Ellsworth et al. [<a href="#B21-remotesensing-13-00813" class="html-bibr">21</a>] and altered management in Hart Mountain National Antelope Refuge, Oregon, related to the removal of domestic livestock and wild horses [<a href="#B22-remotesensing-13-00813" class="html-bibr">22</a>]. (<b>A</b>) Landsat image from 8 August 1997, (<b>B</b>) Landsat image from 12 July 2018, with inset Google Earth image from 5 September 2014, showing the heterogeneity in burn severity in a treatment pasture 17 years after fire, and a second inset with the case study location, (<b>C</b>) temporal slope of bare ground from 1985 to 2018, inset is plot photo from 15 August 1990, (<b>D</b>) temporal slope of shrub cover from 1985 to 2018, inset is plot photo from 8 August 2013, with text below showing change in fractional component cover in the pixel representing the photo location, (<b>E</b>) mean annual herbaceous and shrub cover in <span class="html-italic">Rx</span> burn and treatment pastures. For <b>C</b> and <b>D</b>, refer to <a href="#remotesensing-13-00813-f003" class="html-fig">Figure 3</a> for interpretation of slope symbology. Photo credit [<a href="#B22-remotesensing-13-00813" class="html-bibr">22</a>].</p>
Full article ">Figure 8
<p>Component cover changes related to woody encroachment by honey mesquite, vegetation treatments, and energy development in southeast New Mexico. (<b>A</b>) Temporal slope of shrub cover from 1985 to 2018, with Bureau of Land Management (BLM) vegetation treatments (herbicide applications) outlined in black and year of treatment indicated. Inset map shows location of case study within the study area. Stars indicate the location of corresponding plot photos for a site treated with herbicide in 2006 and one without a treatment history, left and right, respectively. Photos include the fractional cover change from 1985 to 2018 in each plot. Photos taken on 20 August 2016, credit M. Rigge. (<b>B</b>) Temporal slope of bare ground cover from 1985 to 2018. For A and B, refer to <a href="#remotesensing-13-00813-f003" class="html-fig">Figure 3</a> for interpretation of temporal slope symbology.</p>
Full article ">Figure 9
<p>Examples of the impacts of wildfire (timing indicated with vertical red line) on component cover values and the trajectories of recovery following burns in southwest Idaho and north central Nevada (location shown on inset map). The impacts of fire were plotted by calculating the departure from the time-series mean component cover in each burned area separately, where positive values indicate higher component cover than the long term mean in that year. Data reflect the mean condition within each burned area.</p>
Full article ">Figure 10
<p>Change Fraction (CF) sensitivity to fractional component change in WRS path/row 37/31 (southwest Wyoming). Mean of the absolute fractional component change between base and target years by CF confidence level. Data are from across unburned portions of the path/row and span all target years from 1985 to 2018. CF values less than 30 are not displayed because no labeled change can occur below this threshold.</p>
Full article ">
33 pages, 7883 KiB  
Article
Multi-Feature Fusion for Weak Target Detection on Sea-Surface Based on FAR Controllable Deep Forest Model
by Jiahuan Zhang and Hongjun Song
Remote Sens. 2021, 13(4), 812; https://doi.org/10.3390/rs13040812 - 23 Feb 2021
Cited by 7 | Viewed by 2764
Abstract
Target detection on the sea-surface has always been a high-profile problem, and the detection of weak targets is one of its most difficult and most important aspects. Traditional techniques, such as imaging, cannot effectively detect these types of targets, [...] Read more.
Target detection on the sea-surface has always been a high-profile problem, and the detection of weak targets is one of its most difficult and most important aspects. Traditional techniques, such as imaging, cannot effectively detect these types of targets, so researchers have instead turned to mining the characteristics of the received echoes for target detection. This paper proposes a false alarm rate (FAR) controllable deep forest model based on a six-dimensional feature space for efficient and accurate detection of weak targets on the sea-surface. This is the first attempt at applying the deep forest model in this field. The validity of the model was verified on IPIX data, and the detection probability was compared with that of other proposed methods. Under the same FAR condition, the average detection accuracy of the proposed method exceeded 99.19%, which is 9.96% better than the result of the current most advanced method (K-NN FAR-controlled Detector). Experimental results show that multi-feature fusion and the use of a suitable detection framework have a positive effect on the detection of weak targets on the sea-surface. Full article
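A FAR-controllable detector of the kind the abstract describes needs a decision threshold calibrated on clutter-only scores so that a chosen fraction of clutter is declared as target. A hedged sketch of that calibration step alone — synthetic Gaussian scores stand in for real detector outputs, and this is not the paper's deep forest model:

```python
import numpy as np

def far_threshold(clutter_scores, target_far):
    """Threshold on detector scores such that roughly target_far of
    clutter-only samples fall above it (i.e., are declared targets)."""
    return np.quantile(clutter_scores, 1.0 - target_far)

rng = np.random.default_rng(0)
clutter = rng.normal(size=100_000)          # stand-in clutter-only scores
thr = far_threshold(clutter, 1e-3)          # calibrate for FAR = 10^-3
achieved_far = float(np.mean(clutter > thr))
```

With the threshold fixed this way, detection probability can then be measured on target-present cells at a constant, comparable FAR, which is how methods are compared in the abstract.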
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The collection location and a picture of the Intelligent Pixel Processing X-band (IPIX) radar. (<b>a</b>) Topographic map of the IPIX radar data-collection site in 1993; the latitude and longitude of the site are 44°36.72′N and 63°25.41′W, respectively; (<b>b</b>) IPIX radar picture.</p>
Full article ">Figure 2
<p>Average signal–clutter ratio (SCR) of 14 sets of echo data.</p>
Full article ">Figure 3
<p>Relative mean amplitude (RMA) feature of the first dataset. (<b>a</b>) The amplitude of primary cell where target exists and the average amplitude of clutter-only cell; (<b>b</b>) The histogram of RMA at the primary cell where target exists and randomly selected clutter-only cell.</p>
Full article ">Figure 4
<p>Relative information entropy in the time domain (RIET) feature of the first dataset. (<b>a</b>) the histogram of RIET at the primary cell where target exists and randomly selected clutter-only cell; (<b>b</b>) the average RIET of radar echo in each range cell.</p>
Full article ">Figure 5
<p>Frequency spectrogram of sea clutter and target signals from different echoes. (<b>a</b>) is the spectrogram of data names 19931107_135603_starea; (<b>b</b>) is the spectrogram of data names 19931111_163625_starea.</p>
Full article ">Figure 6
<p>Relative value of Doppler peak height (RPH) feature of the first dataset. (<b>a</b>) the histogram of RPH at the primary cell where target exists and randomly selected clutter-only cell; (<b>b</b>) the average RPH of radar echo in each range cell.</p>
Full article ">Figure 7
<p>RIED feature of the first dataset. (<b>a</b>) the histogram of RIED at the primary cell where target exists and randomly selected clutter-only cell; (<b>b</b>) the average RIED of radar echo in each range cell.</p>
Full article ">Figure 8
<p>Time–frequency figures of the target echo and sea clutter. (<b>a</b>) Choi–Williams distribution (CWD) of target echo; (<b>b</b>) CWD of sea clutter; (<b>c</b>) N-CWD (Normalized CWD) of target echo; (<b>d</b>) N-CWD of sea clutter.</p>
Full article ">Figure 9
<p>Important time–frequency points (ITFP) figures of target echo and sea clutter. (<b>a</b>) ITFP of target echo; (<b>b</b>) ITFP of sea clutter.</p>
Full article ">Figure 10
<p>Representation of one-dimensional characteristics on targets and clutter. (<b>a</b>) RMA; (<b>b</b>) RIET; (<b>c</b>) RPH; (<b>d</b>) RIED.</p>
Full article ">Figure 11
<p>Representation of two-dimensional characteristics on targets and clutter. (<b>a</b>) RMA—RIET; (<b>b</b>) RMA—RPH; (<b>c</b>) RMA—RIED; (<b>d</b>) RIET—RIED; (<b>e</b>) RPH—RIED; (<b>f</b>) RIET—RPH.</p>
Full article ">Figure 12
<p>Representation of three-dimensional characteristics on targets and clutter. (<b>a</b>) RMA—RPH—RIET; (<b>b</b>) RPH—RIET—RIED; (<b>c</b>) RIET—RMA—RIED; (<b>d</b>) RPH—RMA—RIED.</p>
Full article ">Figure 13
<p>Characteristic correlation analysis of target and clutter. (<b>a</b>) characteristic correlation of target; (<b>b</b>) characteristic correlation of clutter.</p>
Full article ">Figure 14
<p>Flow diagram of deep forest.</p>
Full article ">Figure 15
<p>Cascade layer and time cost in the multi-grained scanning layer.</p>
Full article ">Figure 16
<p>The influence of gcForest on detection probability of different polarization, red points in charts are the points with the highest detection probability. (<b>a</b>) HH polarization; (<b>b</b>) HV polarization; (<b>c</b>) VH polarization; (<b>d</b>) VV polarization.</p>
Full article ">Figure 17
<p>The influence of parameters on training accuracy and time consumption. (<b>a</b>) Performance of ExtraTreeClassifier; (<b>b</b>) performance of RandomForestClassifier; (<b>c</b>) performance of XGBClassifier; (<b>d</b>) performance of LogisticRegression; (<b>e</b>) performance of SGDClassifier; (<b>f</b>) performance of synthesis model.</p>
Fig">
Full article ">Figure 18
<p>The influence of gcForest on detection probability of different polarization, red points in charts are the points with the highest detection probability. (<b>a</b>) HH polarization; (<b>b</b>) HV polarization; (<b>c</b>) VH polarization; (<b>d</b>) VV polarization.</p>
Full article ">
28 pages, 11833 KiB  
Article
Intercomparison of Global Sea Surface Salinity from Multiple Datasets over 2011–2018
by Hao Liu and Zexun Wei
Remote Sens. 2021, 13(4), 811; https://doi.org/10.3390/rs13040811 - 23 Feb 2021
Cited by 11 | Viewed by 3357
Abstract
The variability in sea surface salinity (SSS) on different time scales plays an important role in associated oceanic or climate processes. In this study, we compare the SSS on sub-annual, annual, and interannual time scales among ten datasets, including in situ-based and satellite-based [...] Read more.
The variability in sea surface salinity (SSS) on different time scales plays an important role in associated oceanic or climate processes. In this study, we compare the SSS on sub-annual, annual, and interannual time scales among ten datasets, including in situ-based and satellite-based SSS products, over 2011–2018. Furthermore, the dominant mode on different time scales is compared using the empirical orthogonal function (EOF). Our results show that the largest spread among the ten products occurs on the sub-annual time scale. High correlation coefficients (0.6~0.95) are found in the global mean annual and interannual SSS between individual products and the ensemble mean. Furthermore, this study shows good agreement among the ten datasets in representing the dominant mode of SSS on the annual and interannual time scales. This analysis provides information on the consistency and discrepancies among the datasets to guide their future use, such as improving ocean data assimilation and the quality of satellite-based data. Full article
(This article belongs to the Special Issue Moving Forward on Remote Sensing of Sea Surface Salinity)
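The EOF analysis used above to compare dominant modes reduces, for a (time × space) anomaly matrix, to a singular value decomposition: the leading right singular vector is the spatial loading, and the scaled left singular vector is the principal component. A minimal sketch with a synthetic rank-one field as the example; the function is illustrative, not the authors' implementation:

```python
import numpy as np

def leading_eof(field):
    """First EOF of a (time, space) field: spatial loading, principal
    component, and the fraction of variance the mode explains."""
    anom = field - field.mean(axis=0)             # anomalies about the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return vt[0], u[:, 0] * s[0], s[0] ** 2 / np.sum(s ** 2)

# A field built from a single oscillating spatial pattern should be
# captured almost entirely by the first mode.
t = np.linspace(0.0, 4.0 * np.pi, 96)
pattern = np.array([1.0, -2.0, 0.5, 3.0])
field = np.outer(np.sin(t), pattern)
loading, pc, frac = leading_eof(field)
```

For real SSS grids, the same call applies after flattening latitude/longitude into the space axis (and, commonly, area-weighting the anomalies first).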
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The horizontal distribution of (<b>a</b>) the ensemble mean (<math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">S</mi> <mo>¯</mo> </mover> </semantics></math>), (<b>c</b>) STD (<math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">S</mi> <mrow> <mi mathvariant="bold-italic">S</mi> <mi mathvariant="bold-italic">T</mi> <mi mathvariant="bold-italic">D</mi> </mrow> </msub> </mrow> </semantics></math> ), and (<b>e</b>) largest spread of the climatological SSS derived from ten products averaged over 2011–2018. (<b>b</b>), (<b>d</b>) and (<b>f</b>) are the same as (<b>a</b>), (<b>c</b>), and (<b>e</b>) but for the variability in SSS estimated by STD over 2011–2018. Blank regions denote that no existing data were available for at least one product.</p>
Full article ">Figure 2
<p>The difference in the climatological mean SSS between each gridded product and the ensemble mean fields. The units for salinity anomalies are g/kg.</p>
Full article ">Figure 3
<p>Same as in <a href="#remotesensing-13-00811-f002" class="html-fig">Figure 2</a> but for salinity variability, which is denoted by one standard deviation of the monthly fields.</p>
Full article ">Figure 4
<p>(<b>a</b>) The STD of the ensemble mean of SSS on the sub-annual time scale. (<b>b</b>) The spread of the STD of the sub-annual SSS time series based on ten products. (<b>c</b>–<b>j</b>) The difference between the STDs of the sub-annual SSS signal from individual products and the ensemble mean. The blue lines in (<b>a</b>–<b>l</b>) denote the major rivers worldwide.</p>
Full article ">Figure 5
<p>Percentage of the total SSS variance explained by sub-annual variability.</p>
Full article ">Figure 6
<p>(<b>a</b>–<b>j</b>) The principal components for sub-annual SSS time series and (<b>k</b>) their contributions to the total variability in the sub-annual SSS signal. The dark blue bar in (<b>k</b>) denotes EN4, blue represents JAMSTEC, light blue represents IAP, green represents IPRC, yellow represents SIO, orange represents BOA, red represents LOCEAN, purple represents BEC, dark green denotes ECA CCI and bright green denotes CMEMS.</p>
Full article ">Figure 7
<p>The spatial loadings of the first EOF modes for the sub-annual SSS derived from an individual product.</p>
Full article ">Figure 8
<p>(<b>a</b>) The STD of the ensemble mean of annual SSS. (<b>b</b>) The spread of the STD of the annual SSS time series based on ten products. (<b>c</b>–<b>j</b>) The difference between the STD of the annual SSS signal from individual products and the ensemble mean.</p>
Full article ">Figure 9
<p>Percentage of total SSS variance explained by annual variability.</p>
Full article ">Figure 10
<p>(<b>a</b>) The principal components for the annual SSS time series derived from individual product and (<b>b</b>) their contributions to the total variability in the annual SSS signal. The dark blue line/bar in (<b>a</b>) and (<b>b</b>) denotes EN4, blue represents JAMSTEC, light blue represents IAP, green represents IPRC, yellow represents SIO, orange represents BOA, red represents LOCEAN, purple represents BEC, dark green denotes ECA CCI and bright green denotes CMEMS. (<b>c</b>) The ensemble mean for the spatial loadings of the first EOF modes.</p>
Full article ">Figure 11
<p>(<b>a</b>) The STD of the ensemble mean of the interannual SSS. (<b>b</b>) The spread of STD of the interannual SSS time series based on ten products. (<b>c</b>–<b>j</b>) The difference between the STD of the interannual SSS signal from individual products and the ensemble mean.</p>
Full article ">Figure 12
<p>Percentage of the total SSS variance explained by interannual variability.</p>
Full article ">Figure 13
<p>(<b>a</b>) The principal components for the interannual SSS time series and (<b>b</b>) their contributions to the total variability in the interannual SSS signal. The dark blue line/bar in (<b>a</b>) and (<b>b</b>) denotes EN4, blue represents JAMSTEC, light blue represents IAP, green represents IPRC, yellow represents SIO, orange represents BOA, red represents LOCEAN, purple represents BEC, dark green denotes ECA CCI and bright green denotes CMEMS. (<b>c</b>) The ensemble mean for the spatial loadings of the first EOF modes.</p>
Full article ">Figure 14
<p>Horizontal distribution of the leading SSS mode over the global ocean.</p>
Full article ">Figure 15
<p>Taylor diagrams comparing ten SSS products on different time scales averaged over the global ocean. The black circle denotes the ensemble mean, dark blue denotes EN4, blue represents JAMSTEC, light blue represents IAP, green represents IPRC, yellow represents SIO, orange represents BOA, red represents LOCEAN, purple represents BEC, dark green denotes ECA CCI and bright green denotes CMEMS.</p>
Full article ">Figure A1
<p>The horizontal distribution of the (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>) ensemble means, (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>) STD, and (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>) largest spread of seasonal SSS derived from ten products averaged over 2011–2018.</p>
Full article ">Figure A2
<p>The spatial pattern of the leading mode of the annual SSS variations from ten products. The units are in g/kg.</p>
Full article ">Figure A3
<p>The spatial pattern of the leading mode of the interannual SSS variations from ten products. The units are in g/kg.</p>
Full article ">Figure A4
<p>Time series of global mean SSS anomalies (solid lines) and the residual of the decomposed SSS signal (dotted lines) derived from ten SSS products. The ratio of the residual SSS variance to the total SSS variance is listed in the northeastern corner of each plot. The units for SSS anomalies are g/kg.</p>
Full article ">Figure A5
<p>The ratio of residual SSS variance to the total SSS variance in ten products. The residual SSS is derived from the subtraction of the original SSS signal from the sum of sub-annual, annual and interannual SSS time series. The units are %.</p>
Full article ">
24 pages, 10819 KiB  
Article
Beamforming of LOFAR Radio-Telescope for Passive Radiolocation Purposes
by Aleksander Droszcz, Konrad Jędrzejewski, Julia Kłos, Krzysztof Kulpa and Mariusz Pożoga
Remote Sens. 2021, 13(4), 810; https://doi.org/10.3390/rs13040810 - 23 Feb 2021
Cited by 14 | Viewed by 3026
Abstract
This paper presents the results of investigations on the beamforming of a low-frequency radio-telescope LOFAR which can be used as a receiver in passive coherent location (PCL) radars for aerial and space object detection and tracking. The use of a LOFAR radio-telescope for [...] Read more.
This paper presents the results of investigations on the beamforming of a low-frequency radio-telescope LOFAR which can be used as a receiver in passive coherent location (PCL) radars for aerial and space object detection and tracking. The use of a LOFAR radio-telescope for the passive tracking of space objects can be a highly cost-effective solution because most of the necessary equipment for passive radiolocation already exists in the form of LOFAR stations. The capability of radiolocating planes with a single LOFAR station in Borowiec is considered a 'proof of concept' for future research focused on the localization of space objects. Beam patterns of single sets of LOFAR antennas (known as tiles), as well as of the entire LOFAR station, are presented and thoroughly discussed in the paper. Issues related to grating lobes in LOFAR beam patterns are also highlighted. A beamforming algorithm used for passive radiolocation purposes, exploiting data collected by a LOFAR station, is also discussed. The results of preliminary experiments carried out with real signals collected by the LOFAR station in Borowiec, Poland, confirm that appropriate beamforming can significantly increase the radar's detection range as well as the detection certainty. Full article
(This article belongs to the Special Issue Selected Papers of Microwave and Radar Week (MRW 2020))
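The beam steering behind the tile and station patterns can be sketched as narrowband delay-and-sum: phase weights align a planar array toward a chosen (azimuth, elevation), giving unity response in the steered direction and reduced response elsewhere. The random antenna layout, the azimuth convention, and the 240 MHz carrier below are illustrative assumptions, not the Borowiec station geometry:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def steering_vector(xy, az_deg, el_deg, fc):
    """Narrowband array response of planar antenna positions xy (N, 2)
    for a plane wave arriving from (azimuth, elevation) in degrees."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    # Horizontal projection of the unit vector toward the source
    # (azimuth measured from the y-axis here; an assumed convention).
    u = np.array([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el)])
    return np.exp(1j * 2.0 * np.pi * fc / C * (xy @ u))

def beam_gain(xy, az0, el0, az, el, fc):
    """Normalized gain of a beam steered at (az0, el0), evaluated at (az, el)."""
    w = np.conj(steering_vector(xy, az0, el0, fc)) / len(xy)  # steering weights
    return abs(np.sum(w * steering_vector(xy, az, el, fc)))

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(16, 2))  # 16 antennas in a 10 m square (toy layout)
fc = 240e6                                  # HBA-band carrier, as in the figure captions
on_beam = beam_gain(xy, 8.0, 20.0, 8.0, 20.0, fc)   # response at steered direction
off_beam = beam_gain(xy, 8.0, 20.0, 60.0, 45.0, fc) # response well off boresight
```

Sweeping (az, el) over a grid and plotting `beam_gain` in dB reproduces the kind of steered beam patterns shown in Figures 8 and 13, grating lobes included when element spacing exceeds half a wavelength.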
Show Figures

Figure 1
<p>Locations of existing and planned LOw-Frequency ARray (LOFAR) stations in Europe. Credit: Netherlands Institute for Radio Astronomy ASTRON.</p>
Full article ">Figure 2
<p>LOFAR PL610 station, Borowiec.</p>
Full article ">Figure 3
<p>Distribution of LOFAR tiles in the LOFAR station in Borowiec.</p>
Full article ">Figure 4
<p>Plane wave approaching a LOFAR tile.</p>
Full article ">Figure 5
<p>Approximation of the elevation beam pattern for a single high-frequency band antenna (HBA), for <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>240</mn> <mspace width="3.33333pt"/> <mi>MHz</mi> </mrow> </semantics></math> created with a 4th order polynomial.</p>
Full article ">Figure 6
<p>Unsteered beam pattern for a LOFAR tile.</p>
Full article ">Figure 7
<p>Vertical cut at <math display="inline"><semantics> <msup> <mn>45</mn> <mo>°</mo> </msup> </semantics></math> azimuth of the unsteered beam pattern for the LOFAR tile.</p>
Full article ">Figure 8
<p>Beam pattern of the LOFAR tile with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Vertical cut at <math display="inline"><semantics> <msup> <mn>8</mn> <mo>°</mo> </msup> </semantics></math> azimuth of the beam pattern of the LOFAR tile with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Horizontal cut at <math display="inline"><semantics> <msup> <mn>20</mn> <mo>°</mo> </msup> </semantics></math> elevation of the beam pattern of the LOFAR tile with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>t</mi> <mi>i</mi> <mi>l</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Unsteered beam pattern of a LOFAR station.</p>
Full article ">Figure 12
<p>Vertical cut at <math display="inline"><semantics> <msup> <mn>45</mn> <mo>°</mo> </msup> </semantics></math> azimuth of the unsteered beam pattern of the LOFAR station.</p>
Full article ">Figure 13
<p>Beam pattern of the LOFAR station with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 14
<p>Vertical cut at <math display="inline"><semantics> <msup> <mn>8</mn> <mo>°</mo> </msup> </semantics></math> azimuth of the beam pattern of the LOFAR station with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 15
<p>Horizontal cut at <math display="inline"><semantics> <msup> <mn>20</mn> <mo>°</mo> </msup> </semantics></math> elevation of the beam pattern of the LOFAR station with the beam steered at <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 16
<p>Horizontal cuts at <math display="inline"><semantics> <msup> <mn>20</mn> <mo>°</mo> </msup> </semantics></math> elevation of the beam patterns of the LOFAR station with the digital beams steered at different <math display="inline"><semantics> <msub> <mi>ϕ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>20</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 17
<p>Vertical cuts at <math display="inline"><semantics> <msup> <mn>8</mn> <mo>°</mo> </msup> </semantics></math> azimuth of the beam patterns of the LOFAR station with the digital beams steered at different <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <msup> <mn>8</mn> <mo>°</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 18
<p>Detected target cross-section in dBsm versus integration time and orbit height.</p>
Full article ">Figure 19
<p>Map with planes around the LOFAR station in Borowiec at the moment of the registration, <span class="html-italic">L</span>—the LOFAR station in Borowiec, <math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math>—the broadcasting station in Piatkowo, <math display="inline"><semantics> <msub> <mi>S</mi> <mn>2</mn> </msub> </semantics></math>—the broadcasting station in Srem.</p>
Full article ">Figure 20
<p>Cross-ambiguity function for the surveillance signal acquired by one tile.</p>
Full article ">Figure 21
<p>Cross-ambiguity function for the beam steered in the direction of the <span class="html-italic">SWR160</span> plane.</p>
Full article ">Figure 22
<p>Two ellipses computed based on the <span class="html-italic">SWR160</span> plane’s bistatic ranges from two transmitter stations.</p>
Full article ">Figure 23
<p>Cross-ambiguity function for the beam steered in the direction of the <span class="html-italic">RYR2XJ</span> plane.</p>
Full article ">Figure 24
<p>Two ellipses computed based on the <span class="html-italic">RYR2XJ</span> plane’s bistatic ranges from two transmitter stations.</p>
Full article ">
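The delay–Doppler processing behind the cross-ambiguity surfaces shown in Figures 20, 21, and 23 can be sketched with a small discrete-time implementation. This is an illustrative sketch, not the authors' processing chain; the signal names and parameters are assumptions.

```python
import numpy as np

def cross_ambiguity(surv, ref, max_delay):
    """Discrete cross-ambiguity surface |CAF(delay, Doppler)| for passive radar.

    surv, ref : complex baseband surveillance and reference signals (same length);
    max_delay : number of delay bins to evaluate.
    Row l is the FFT over time of surv[t] * conj(ref[t - l]), so a target at
    bistatic delay l and Doppler bin k produces a peak at (l, k).
    """
    n = len(surv)
    caf = np.empty((max_delay, n), complex)
    for l in range(max_delay):
        shifted = np.roll(ref, l)      # delayed reference copy
        if l:
            shifted[:l] = 0            # zero the wrapped-around samples
        caf[l] = np.fft.fft(surv * np.conj(shifted))
    return np.abs(caf)
```

A synthetic echo (delayed, Doppler-shifted copy of the reference) then shows up as a single dominant peak at its delay/Doppler cell, analogous to the plane echoes in the figures.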
18 pages, 9708 KiB  
Article
Mapping Woody Volume of Mediterranean Forests by Using SAR and Machine Learning: A Case Study in Central Italy
by Emanuele Santi, Marta Chiesi, Giacomo Fontanelli, Alessandro Lapini, Simonetta Paloscia, Simone Pettinato, Giuliano Ramat and Leonardo Santurri
Remote Sens. 2021, 13(4), 809; https://doi.org/10.3390/rs13040809 - 23 Feb 2021
Cited by 5 | Viewed by 2550
Abstract
In this paper, multi-frequency synthetic aperture radar (SAR) data at L- and C-bands (ALOS PALSAR and Envisat/ASAR) were used to estimate forest biomass in Tuscany, in Central Italy. The ground measurements of woody volume (WV, in m3/ha), which can be considered as a proxy of forest biomass, were retrieved from the Italian National Forest Inventory (NFI). After a preliminary investigation to assess the sensitivity of backscatter at C- and L-bands to forest biomass, an approach based on an artificial neural network (ANN) was implemented. The ANN was trained using the backscattering coefficient at L-band (ALOS PALSAR, HH and HV polarization) and C-band (Envisat ASAR in HH polarization) as inputs. Spatially distributed WV values for the entire test area were derived by the integration (fusion) of a canopy height map derived from the Ice, Cloud, and Land Elevation Geoscience Laser Altimeter System (ICESat GLAS) and the NFI data, in order to build a significant ground truth dataset for the training stage. The analysis of the backscattering sensitivity to WV showed a moderate correlation at L-band and an almost negligible one at C-band. Despite this, the ANN algorithm was able to exploit the synergy of SAR frequencies and polarizations, estimating WV with average Pearson's correlation coefficient (R) = 0.96 and root mean square error (RMSE) ≃ 39 m3/ha when applied to the test dataset and average R = 0.86 and RMSE ≃ 75 m3/ha when validated on the direct measurements from the NFI. Considering the heterogeneity of the scenario (Mediterranean mixed forests in a hilly landscape) and the small amount of available ground measurements with respect to the spatial variability of different plots, the obtained results can be considered satisfactory. Moreover, the successful use of WV from global maps for implementing the algorithm suggests the possibility of applying the algorithm to wider areas or even to global scales. Full article
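The ANN retrieval step can be illustrated with a minimal one-hidden-layer network trained by batch gradient descent on the three backscatter inputs (L-band HH and HV, C-band HH). This is a hedged sketch with synthetic data, not the authors' trained network or its hyper-parameters.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=5000, seed=0):
    """Fit a one-hidden-layer tanh network y ~ f(X) by batch gradient descent.

    X : (n, f) backscatter features (e.g. L-HH, L-HV, C-HH in dB, standardized);
    y : (n,) woody volume targets. Returns a predict(X_new) closure.
    """
    rng = np.random.default_rng(seed)
    n, f = X.shape
    W1 = rng.normal(0.0, 0.5, (f, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        err = h @ W2 + b2 - y                      # residuals of linear output
        gh = np.outer(err, W2) * (1.0 - h ** 2)    # backprop through tanh
        W2 -= lr * (h.T @ err) / n
        b2 -= lr * err.mean()
        W1 -= lr * (X.T @ gh) / n
        b1 -= lr * gh.mean(axis=0)
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

In the paper the network is trained against the GLAS-derived WV map and validated on NFI plots; in practice inputs and targets would be standardized before training.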
Show Figures

Figure 1

<p>Position of Tuscany (boundaries in red line) in Central Italy with the superimposition of the woody volume (WV) measurements (white dots) available for the area covered by synthetic aperture radar (SAR) images.</p>
Full article ">Figure 2
<p>(<b>a</b>) The WV map obtained by integrating Ice, Cloud, and Land Elevation Geoscience Laser Altimeter System (ICESat GLAS) canopy height and Italian National Forest Inventory (NFI) measurements, (<b>b</b>) the NFI measurements, (<b>c</b>) validation of the GLAS WV map obtained by comparison with ground truth measurements.</p>
Full article ">Figure 3
<p>PALSAR (blue) and ASAR (red) data coverage of Tuscany.</p>
Full article ">Figure 4
<p>Flowchart of the artificial neural network (ANN) training, testing, and validation process.</p>
Full article ">Figure 5
<p>Temporal trends of backscatter at L-band for different intervals of forest biomass: (<b>a</b>) HH polarization, (<b>b</b>) HV polarization. Temporal trends of rainfall acquired by meteorological stations located in the north, center, and south of Tuscany are also reported in (<b>a</b>). Each bar represents the cumulative rainfall of the previous 7 days.</p>
Full article ">Figure 6
<p>Regression plots between the woody volume and the backscatter (dB) at L-band for: (<b>a</b>) HH polarization, (<b>b</b>) HV polarization.</p>
Full article ">Figure 7
<p>Backscatter (σ°) behavior as a function of woody volume (WV): (<b>a</b>) PALSAR HH polarization, (<b>b</b>) PALSAR HV polarization, (<b>c</b>) ASAR VV polarization.</p>
Full article ">Figure 8
<p>Results obtained for the date 2008-09-09: (<b>a</b>) the test result based on the 99% of the data not involved in the training, represented as density plot of ANN estimated vs. target WV, (<b>b</b>) the ANN validation on the NFI dataset, (<b>c</b>) the GLAS WV map considered as target, (<b>d</b>) the WV map generated by the ANN, and (<b>e</b>) the absolute error map.</p>
Figure 8">
Full article ">
41 pages, 1773 KiB  
Review
Deep Learning-Based Semantic Segmentation of Urban Features in Satellite Images: A Review and Meta-Analysis
by Bipul Neupane, Teerayut Horanont and Jagannath Aryal
Remote Sens. 2021, 13(4), 808; https://doi.org/10.3390/rs13040808 - 23 Feb 2021
Cited by 140 | Viewed by 16553
Abstract
Availability of very high-resolution remote sensing images and advancement of deep learning methods have shifted the paradigm of image classification from pixel-based and object-based methods to deep learning-based semantic segmentation. This shift demands a structured analysis and revision of the current status of research on deep learning-based semantic segmentation. The focus of this paper is on urban remote sensing images. We review and perform a meta-analysis to juxtapose recent papers in terms of research problems, data sources, data preparation methods (including pre-processing and augmentation techniques), training details (architectures, backbones, frameworks, optimizers, loss functions, and other hyper-parameters), and performance comparison. Our detailed review and meta-analysis show that deep learning not only outperforms traditional methods in terms of accuracy, but also addresses several challenges previously faced. Further, we provide future directions of research in this domain. Full article
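Among the performance metrics the review tabulates (see Figure 10), per-class intersection-over-union (IoU) is one of the most common alongside overall/pixel accuracy and F1. A minimal, illustrative IoU computation for label maps:

```python
import numpy as np

def iou_per_class(pred, truth, n_classes):
    """Per-class intersection-over-union for semantic segmentation outputs.

    pred, truth : integer label maps of identical shape.
    Returns an array of IoU values, NaN for classes absent from both maps.
    """
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        inter = np.logical_and(p, t).sum()   # pixels both maps assign to c
        union = np.logical_or(p, t).sum()    # pixels either map assigns to c
        ious.append(inter / union if union else np.nan)
    return np.array(ious)
```

Mean IoU is then simply the NaN-aware average over classes.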
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Overview of first author’s affiliation grouped by countries and continents.</p>
Full article ">Figure 2
<p>Evolution of deep learning-based semantic segmentation of urban remote sensing images. (<b>a</b>) The distribution of number of publications into four study targets. (<b>b</b>) The number of publications by year.</p>
Full article ">Figure 3
<p>Distribution of image preparation and augmentation methods used.</p>
Full article ">Figure 4
<p>Overview of the deep learning (DL) architectures employed. The encoder-decoder models like Fully Convolutional Network (FCN), U-Net, SegNet, DeepLab, Hourglass and others are the most commonly employed ones. Many papers have employed more than one of these architectures to later fuse the output feature maps.</p>
Full article ">Figure 5
<p>Overview of the convolutional backbones employed. Out of the papers that mentioned the use of backbones, the most commonly employed are ResNet and Visual Geometry Group (VGG). As many papers used multiple DL architectures, more than one backbone is used by those papers.</p>
Full article ">Figure 6
<p>Distribution of frameworks used to wrap the deep learning models.</p>
Full article ">Figure 7
<p>Distribution of optimizers used to fit the deep learning models.</p>
Full article ">Figure 8
<p>Overview of loss functions used to evaluate the deep learning models. The most commonly used loss is the Cross Entropy (CE) loss.</p>
Full article ">Figure 9
<p>Overview of GPUs used to run the deep learning models. All GPUs are from Nvidia.</p>
Full article ">Figure 10
<p>Overview of performance metrics used to evaluate the results of deep learning models.</p>
Full article ">
16 pages, 2973 KiB  
Article
Uncertainty Assessment of the Vertically-Resolved Cloud Amount for Joint CloudSat–CALIPSO Radar–Lidar Observations
by Andrzej Z. Kotarba and Mateusz Solecki
Remote Sens. 2021, 13(4), 807; https://doi.org/10.3390/rs13040807 - 23 Feb 2021
Cited by 7 | Viewed by 2573
Abstract
The joint CloudSat–Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) climatology remains the only dataset that provides a global, vertically-resolved cloud amount statistic. However, the data are affected by uncertainty that results from a combination of infrequent sampling and a very narrow, pencil-like swath. This study provides the first global assessment of these uncertainties, which are quantified using bootstrapped confidence intervals. Rather than focusing on a purely theoretical discussion, we investigate empirical data that span a five-year period between 2006 and 2011. We examine the 2B-Geometric Profiling (GEOPROF)-LIDAR cloud product, at typical spatial resolutions found in global grids (1.0°, 2.5°, 5.0°, and 10.0°), four confidence levels (0.85, 0.90, 0.95, and 0.99), and three time scales (annual, seasonal, and monthly). Our results demonstrate that it is impossible to estimate, for every location, a five-year mean cloud amount based on CloudSat–CALIPSO data, assuming an accuracy of 1% or 5%, a high confidence level (>0.95), and a fine spatial resolution (1°–2.5°). In fact, the 1% requirement was only met by ~6.5% of atmospheric volumes at 1° and 2.5°, while the more tolerant criterion (5%) was met by 22.5% of volumes at 1°, or 48.9% at 2.5° resolution. In order for at least 99% of volumes to meet an accuracy criterion, the criterion itself would have to be relaxed to ~20% for 1° data, or to ~8% for 2.5° data. Our study also showed that the average confidence interval width decreased fourfold when the spatial resolution increased from 1° to 10°, doubled when the confidence level increased from 0.85 to 0.99, and tripled when the number of data-months increased from one (monthly mean) to twelve (annual mean). The cloud regime (the mean cloud amount and its standard deviation) arguably had the most impact on the width of the confidence interval.
Our findings suggest that existing uncertainties in the CloudSat–CALIPSO five-year climatology are primarily the result of climate-specific factors, rather than the sampling scheme. Results that are presented in the form of statistics or maps, as in this study, can help the scientific community to improve accuracy assessments (which are frequently omitted) when analyzing existing and future CloudSat–CALIPSO cloud climatologies. Full article
(This article belongs to the Special Issue Active and Passive Remote Sensing of Aerosols and Clouds)
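The bootstrapped confidence intervals that drive this analysis can be sketched as a percentile bootstrap over the per-overpass cloud fractions of a single grid volume. This is a generic sketch under assumed inputs; the study's exact resampling settings may differ.

```python
import numpy as np

def bootstrap_ci(samples, level=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap confidence interval for the mean cloud amount.

    samples : 1-D array of per-overpass cloud fractions for one grid volume.
    Returns (lo, hi); the CI width (hi - lo) is the uncertainty measure
    mapped and tabulated in the study.
    """
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, float)
    # Resample with replacement and collect the bootstrap distribution of means.
    means = np.array([
        rng.choice(samples, size=samples.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    alpha = (1.0 - level) / 2.0
    return np.quantile(means, alpha), np.quantile(means, 1.0 - alpha)
```

As in the paper, the interval widens with the confidence level and narrows as more overpasses (data-months) enter the sample.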
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Zone-averaged cloud amount based on CloudSat-CALIPSO data (2006–2011), showing the region of the troposphere considered in this study. White lines represent the upper boundary of cloud levels (low, mid, high), and reflect actual change in tropopause height (rather than the less-realistic fixed altitude).</p>
Full article ">Figure 2
<p>Frequency of CI widths, with respect to the assumed confidence level, and the spatiotemporal averaging strategy. Note the variation in CI widths for annual (0–30%; (<b>A</b>,<b>D</b>,<b>G</b>,<b>J</b>)), seasonal (0–50%; (<b>B</b>,<b>E</b>,<b>H</b>,<b>K</b>)), and monthly (0–70%; (<b>C</b>,<b>F</b>,<b>I</b>,<b>L</b>)) values.</p>
Full article ">Figure 3
<p>Column-averaged CI widths for mean annual cloud amount at all levels (<b>A</b>), low (<b>B</b>), mid (<b>C</b>), and high level (<b>D</b>), along with corresponding change with latitude (boxplots on the right; the bold red line is the mean cloud amount, the thick red line is standard deviation of cloud amount). Statistics for data analyzed at 2.5° spatial resolution, and CL = 0.95. Low/ mid/ high cloud levels are defined in <a href="#remotesensing-13-00807-f001" class="html-fig">Figure 1</a>.</p>
Full article ">Figure 4
<p>Percentage of atmospheric volumes that meet the cloud amount accuracy criterion (approximated by the width of the 95% CI) at annual (<b>A</b>), seasonal (<b>B</b>), and monthly (<b>C</b>) time scale.</p>
Full article ">Figure 5
<p>Column fraction of atmospheric volumes meeting the cloud amount accuracy requirement of 1% (<b>A</b>,<b>D</b>), 5% (<b>B</b>,<b>E</b>), and 10% (<b>C</b>,<b>F</b>), with respect to the spatial resolution: 2.5 degree (<b>A</b>–<b>C</b>), and 1.0 degree (<b>D</b>–<b>F</b>). Accuracy is approximated by the width of the 95% CI. Five-year mean annual cloud amount is considered.</p>
Full article ">
23 pages, 4682 KiB  
Article
Crop Biomass Mapping Based on Ecosystem Modeling at Regional Scale Using High Resolution Sentinel-2 Data
by Liming He, Rong Wang, Georgy Mostovoy, Jane Liu, Jing M. Chen, Jiali Shang, Jiangui Liu, Heather McNairn and Jarrett Powers
Remote Sens. 2021, 13(4), 806; https://doi.org/10.3390/rs13040806 - 22 Feb 2021
Cited by 13 | Viewed by 4561
Abstract
We evaluate the potential of using a process-based ecosystem model (BEPS) for crop biomass mapping at 20 m resolution over the research site in Manitoba, western Canada, driven by spatially explicit leaf area index (LAI) retrieved from Sentinel-2 spectral reflectance throughout the entire growing season. We find that overall, the BEPS-simulated crop gross primary production (GPP), net primary production (NPP), and LAI time-series can explain 82%, 83%, and 85%, respectively, of the variation in the above-ground biomass (AGB) for six selected annual crops, while an application of individual crop LAI explains only 50% of the variation in AGB. The linear relationships between the AGB and these three indicators (GPP, NPP and LAI time-series) are rather strong for the six crops, while the slopes of the regression models vary for individual crop types, indicating the need for calibration of key photosynthetic parameters and carbon allocation coefficients. This study demonstrates that accumulated GPP and NPP derived from an ecosystem model, driven by Sentinel-2 LAI data and abiotic data, can be effectively used for crop AGB mapping; the temporal information from LAI is also effective in AGB mapping for some crop types. Full article
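The per-crop GPP–AGB relationships (Figures 3 and 4) amount to ordinary least-squares lines with crop-specific slopes. A minimal sketch with hypothetical field data, not the study's actual measurements:

```python
import numpy as np

def crop_regressions(gpp, agb, crop):
    """Fit per-crop linear models AGB ~ slope * GPP + intercept.

    gpp, agb : 1-D arrays of accumulated GPP and measured AGB per field;
    crop : array of crop-type labels.
    Returns {crop: (slope, intercept, r2)}; the crop-specific slopes are what
    motivates calibrating photosynthetic and carbon-allocation parameters.
    """
    out = {}
    for c in np.unique(crop):
        m = crop == c
        slope, intercept = np.polyfit(gpp[m], agb[m], 1)
        pred = slope * gpp[m] + intercept
        ss_res = ((agb[m] - pred) ** 2).sum()
        ss_tot = ((agb[m] - agb[m].mean()) ** 2).sum()
        out[c] = (slope, intercept, 1.0 - ss_res / ss_tot)
    return out
```

The same shape of fit applies to the NPP and accumulated-LAI indicators.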
Show Figures

Graphical abstract

Full article ">Figure 1
<p>A diagram of the data processing flow for mapping crop biomass using Sentinel-2 data and the Boreal Ecosystems Productivity Simulator (BEPS).</p>
Full article ">Figure 2
<p>The geographical location of the SMAPVEX16-MB field campaign and distribution of 50 sampling fields overlaid on the crop type map. The study area is located in southern Manitoba, Canada. (<b>a</b>) The location of SMAPVEX16-MB in Canada; (<b>b</b>) the distribution of sampling fields; and (<b>c</b>) the regular location of 16 sampling sites within each field.</p>
Full article ">Figure 3
<p>The correlation between BEPS-simulated crop gross primary production (GPP) and ground-measured above-ground biomass for six crop types (oat, soybean, wheat, corn, canola and black bean) for SMAPVEX16-MB.</p>
Full article ">Figure 4
<p>The relationship between BEPS-simulated crop GPP and ground-measured above-ground biomass for each crop type for SMAPVEX16-MB.</p>
Full article ">Figure 5
<p>The correlation between BEPS-simulated crop net primary production (NPP) and ground-measured above-ground biomass for six crop types (oat, soybean, wheat, corn, canola and black bean) for SMAPVEX16-MB.</p>
Full article ">Figure 6
<p>The relationship between BEPS-simulated crop NPP and ground-measured above-ground biomass for each crop type for SMAPVEX16-MB.</p>
Full article ">Figure 7
<p>Relationship between crop accumulated leaf area index (LAI) and ground-measured above-ground biomass for six crop types (oat, soybean, wheat, corn, canola and black bean) for SMAPVEX16-MB.</p>
Full article ">Figure 8
<p>The relationship between crop accumulated LAI and ground-measured above-ground biomass for each crop type for SMAPVEX16-MB.</p>
Full article ">Figure 9
<p>Annual crop GPP in 2016 for the SMAPVEX16-MB study area simulated by BEPS. The Sentinel-2 images used are shown in <a href="#remotesensing-13-00806-t001" class="html-table">Table 1</a>. Figure 11 shows the averaged AGB by crop type in 2016 derived for the SMAPVEX16-MB study area. The differences in average AGB among crop types are clear, while the one-standard-deviation spread of AGB within each crop type is relatively small; we suggest that this small deviation is due to the small study area with similar climate conditions. Two-sample <span class="html-italic">t</span>-test results suggest that the average AGB values differ significantly among crop types (<span class="html-italic">p</span> &lt; 0.05), except for Oats vs. Triticale, Triticale vs. Spring Wheat, and Winter Wheat vs. Hemp.</p>
Full article ">Figure 10
<p>Annual crop above-ground biomass (AGB) in 2016 derived from simulated crop GPP. For the six target crops, their explicit GPP–AGB relationships are applied; for the other crop types, the regression coefficients averaged for all crop types in <a href="#remotesensing-13-00806-f003" class="html-fig">Figure 3</a> are applied. The peas and beans are characterized with low AGB values (blue); while potatoes within center-pivot irrigation have high AGB values (red).</p>
Full article ">Figure 11
<p>Averaged AGB by crop type in 2016 derived for the SMAPVEX16-MB study area. Beans and peas are characterized by lower AGB values.</p>
Full article ">
25 pages, 18777 KiB  
Article
Intraday Variation Mapping of Population Age Structure via Urban-Functional-Region-Based Scaling
by Yuncong Zhao, Yuan Zhang, Hongyan Wang, Xin Du, Qiangzi Li and Jiong Zhu
Remote Sens. 2021, 13(4), 805; https://doi.org/10.3390/rs13040805 - 22 Feb 2021
Cited by 5 | Viewed by 2623
Abstract
The spatial distribution of the population is uneven for various reasons, such as urban-rural differences and differences in geographical conditions. As the basic element of the natural structure of the population, the age structure composition of populations also varies considerably across the world. Obtaining accurate spatiotemporal population age structure maps is crucial for calculating the population size at risk, analyzing population mobility patterns, or calculating health and development indicators. During the past decades, many population maps in the form of administrative units and grids have been produced. However, these population maps are limited by the lack of information on intraday changes in population distribution and on the age structure of the population. Urban functional regions (UFRs) are closely related to population mobility patterns, which can provide information about population variation within a day. Focusing on the area within the Beijing Fifth Ring Road, the political and economic center of Beijing, we showed how to use the temporal scaling factors obtained by analyzing the population survey sampling data and population dasymetric maps in different categories of UFRs to realize the intraday variation mapping of elderly individuals and children. The population dasymetric maps were generated on the basis of covariates related to population. In this article, 50 covariates were calculated from remote sensing data and geospatial data. However, not all covariates are associated with population distribution. In order to improve the accuracy of the dasymetric maps and reduce the cost of mapping, it is necessary to select the optimal covariate subsets for the dasymetric models of the elderly and children.
The random forest recursive feature elimination (RF-RFE) algorithm was introduced in this article to obtain the optimal covariate subsets for the different age groups and to generate the population dasymetric models, screening out optimal subsets of 38 and 26 covariates for the dasymetric models of the elderly and children, respectively. An accurate UFR identification method combining point of interest (POI) data and OpenStreetMap (OSM) road network data is also introduced in this article. The overall accuracy of the UFR identification results was 70.97%, which is quite accurate. The intraday variation maps of population age structure on weekdays and weekends were made within the Beijing Fifth Ring Road. Accuracy evaluation based on sampling data found that the overall accuracy was relatively high: R2 for each time period was higher than 0.5 and root mean square error (RMSE) was less than 0.05. On weekdays in particular, R2 for each time period was higher than 0.61 and RMSE was less than 0.02. Full article
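The final mapping step, multiplying the age-group dasymetric map by the per-UFR temporal scaling factor for each time slot, can be sketched as follows. The array shapes and UFR category coding are assumptions for illustration, not the paper's data layout.

```python
import numpy as np

def intraday_population(dasymetric, ufr_class, scale):
    """Scale a dasymetric population grid by per-UFR temporal factors.

    dasymetric : (H, W) baseline population density for one age group;
    ufr_class  : (H, W) integer UFR category per cell;
    scale      : (n_ufr, n_slots) temporal scaling factor per category and slot.
    Returns an (n_slots, H, W) stack of population maps, one per time slot.
    """
    factors = scale[ufr_class]                    # (H, W, n_slots) lookup
    return np.transpose(factors, (2, 0, 1)) * dasymetric
```

Separate `dasymetric` grids and `scale` tables for the elderly and for children, and for weekdays versus weekends, then yield the four map sets shown in Figure 9.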
Show Figures

Graphical abstract

Full article ">Figure 1
<p>The study area and its location in Beijing. The image data are L14-level data obtained from Google Earth.</p>
Full article ">Figure 2
<p>The study area and sampling points in this article. The image data is L14-level data obtained from Google Earth.</p>
Full article ">Figure 3
<p>Diagram of the methodological framework of intraday variation mapping of population age structure.</p>
Full article ">Figure 4
<p>The relationship between different root mean square error (RMSE) values and the optimal subset of different numbers of variables (<b>a</b>) for the elderly subset and (<b>b</b>) for the child subset. (The red lines represent the minimum RMSE of the model based on different optimal subsets, indicating that the variable numbers of the optimal subsets of the elderly and children were 38 and 26, respectively).</p>
Full article ">Figure 5
<p>Dasymetric population map of (<b>a</b>) the elderly and (<b>b</b>) children.</p>
Full article ">Figure 6
<p>Identification results of urban functional regions (UFRs) for the Fifth Ring Road in Beijing.</p>
Full article ">Figure 7
<p>Comparison of the results of UFR identification in this article and the results of visual interpretation: (<b>a</b>) correctly classified results, (<b>b</b>) the results of UFR in this study, and (<b>c</b>) the results of visual interpretation.</p>
Full article ">Figure 8
<p>The temporal scaling factor of (<b>a</b>) elderly individuals and (<b>b</b>) children.</p>
Full article ">Figure 9
<p>Intraday variation maps of the elderly and children on (<b>a</b>) weekdays and (<b>b</b>) weekends.</p>
Figure 9">
Full article ">Figure 10
<p>Accuracy of the population maps of the elderly and children for (<b>a</b>) weekdays and (<b>b</b>) weekends.</p>
Figure 10">
Full article ">Figure 11
<p>Accuracy of the intraday variation maps of population age structure in different UFRs: (<b>a</b>) open space, (<b>b</b>) public facilities, (<b>c</b>) residential, and (<b>d</b>) industry and commerce facilities.</p>
Full article ">
13 pages, 2741 KiB  
Communication
Modeling Transpiration with Sun-Induced Chlorophyll Fluorescence Observations via Carbon-Water Coupling Methods
by Huaize Feng, Tongren Xu, Liangyun Liu, Sha Zhou, Jingxue Zhao, Shaomin Liu, Ziwei Xu, Kebiao Mao, Xinlei He, Zhongli Zhu and Linna Chai
Remote Sens. 2021, 13(4), 804; https://doi.org/10.3390/rs13040804 - 22 Feb 2021
Cited by 17 | Viewed by 3819
Abstract
Successfully applied in the carbon research area, sun-induced chlorophyll fluorescence (SIF) has raised the interest of researchers from the water research domain. However, current works have focused on the empirical relationship between SIF and plant transpiration (T), while the mechanistic linkage between them has not been fully explored. Two mechanistic methods were developed to estimate T via SIF, namely the water-use efficiency (WUE) method and the conductance method, based on the carbon–water coupling framework. The T estimated by these two methods was compared with T partitioned from eddy-covariance-measured evapotranspiration at four different sites. Both methods showed good performance at the hourly (R2 = 0.57 for the WUE method and 0.67 for the conductance method) and daily scales (R2 = 0.67 for the WUE method and 0.78 for the conductance method). The developed mechanistic methods provide theoretical support and a promising basis for deriving ecosystem T from satellite SIF observations. Full article
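The carbon–water coupling idea behind the WUE method can be illustrated with the underlying-WUE inversion used for the T_zhou benchmark: Zhou-style underlying WUE is defined as uWUE = GPP·sqrt(VPD)/T, so a calibrated potential uWUE gives T = GPP·sqrt(VPD)/uWUE_p. In the SIF variant, GPP would itself be estimated from the SIF observation. The units and the potential-uWUE value below are placeholders, not the paper's calibration.

```python
import numpy as np

def transpiration_uwue(gpp, vpd, uwue_p):
    """Transpiration from the underlying water-use-efficiency coupling.

    Inverts uWUE = GPP * sqrt(VPD) / T using a site-calibrated potential
    value uwue_p, i.e. T = GPP * sqrt(VPD) / uwue_p.

    gpp : gross primary production (g C m-2); vpd : vapour pressure deficit
    (hPa); uwue_p : potential uWUE (g C hPa^0.5 per kg H2O). Returns T in
    kg H2O m-2 over the same period (unit choices are illustrative).
    """
    return gpp * np.sqrt(vpd) / uwue_p
```

The conductance method follows the same coupling logic but goes through a canopy-conductance estimate rather than a fixed uWUE.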
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Flowchart of model calibration.</p>
Full article ">Figure 2
<p>Scatterplot of hourly (first row) and daily (second row) T estimates from the three approaches (linear method, water-use efficiency (WUE) method, and conductance method) versus T<sub>zhou</sub> (estimated by underlying WUE method) with the 1:1 line (black line), along with coefficients of determination (R<sup>2</sup>) and root-mean-square error (RMSE, W/m<sup>2</sup>). Subfigures (<b>a</b>,<b>d</b>) refer to the linear method, (<b>b</b>,<b>e</b>) to the WUE method, and (<b>c</b>,<b>f</b>) to the conductance method.</p>
Full article ">Figure 3
<p>Scatterplot of hourly (first row) and daily (second row) LE estimates from the three approaches (linear method, WUE method, and conductance method) versus LE (observed from eddy covariance) with the 1:1 line (black line), along with coefficients of determination (R<sup>2</sup>) and root-mean-square error (RMSE, W/m<sup>2</sup>). Subfigures (<b>a</b>,<b>d</b>) refer to the linear method, (<b>b</b>,<b>e</b>) to the WUE method, and (<b>c</b>,<b>f</b>) to the conductance method.</p>
Full article ">Figure 4
<p>Scatterplot between sun-induced chlorophyll fluorescence (SIF) and T<sub>WUE</sub> for the four study sites. Subfigures (<b>a</b>–<b>d</b>) for hourly scale; (<b>e</b>–<b>h</b>) for daily scale.</p>
Full article ">
16 pages, 5256 KiB  
Article
UAV Based Estimation of Forest Leaf Area Index (LAI) through Oblique Photogrammetry
by Lingchen Lin, Kunyong Yu, Xiong Yao, Yangbo Deng, Zhenbang Hao, Yan Chen, Nankun Wu and Jian Liu
Remote Sens. 2021, 13(4), 803; https://doi.org/10.3390/rs13040803 - 22 Feb 2021
Cited by 21 | Viewed by 4432
Abstract
As a key canopy structure parameter, the Leaf Area Index (LAI) and methods for estimating it have always attracted attention. To explore a potential low-cost method for estimating forest LAI from 3D point clouds, we took drone photos at different camera angles and set up five schemes (O (0°), T15 (15°), T30 (30°), OT15 (0° and 15°) and OT30 (0° and 30°)), which were used to reconstruct 3D point clouds of the forest canopy through photogrammetry. Subsequently, the LAI values and the vertical leaf area distribution derived from the five schemes were calculated from a voxelized model. Our results show that a serious lack of leaf area in the middle and lower layers makes the LAI estimate of O inaccurate. For oblique photogrammetry, schemes with 30° photos always provided better LAI estimates than schemes with 15° photos (T30 better than T15, OT30 better than OT15), mainly in the lower part of the canopy and particularly in low-LAI areas. The overall structure from the single-tilt-angle schemes (T15, T30) was relatively complete, but their rough point cloud details could not represent the actual LAI well. The multi-angle schemes (OT15, OT30) provided excellent leaf area estimation (OT15: R2 = 0.8225, RMSE = 0.3334 m2/m2; OT30: R2 = 0.9119, RMSE = 0.1790 m2/m2). OT30 provided the best LAI estimation accuracy at a sub-voxel size of 0.09 m and the best checkpoint accuracy (OT30: RMSE [H] = 0.2917 m, RMSE [V] = 0.1797 m). These results highlight that coupling oblique and nadiral photography can be an effective way to estimate forest LAI. Full article
(This article belongs to the Section Forest Remote Sensing)
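The voxel-based LAI calculation described above can be illustrated with a much-simplified sketch: the point cloud is binned into cubic voxels and occupied voxels are converted to leaf area per unit ground area. This is a toy stand-in rather than the paper's method; the one-face-per-voxel conversion and the plot-area handling are assumptions made purely for illustration:

```python
import numpy as np

def voxel_lai(points, voxel=0.09, plot_area=None):
    """Rough LAI proxy from a canopy point cloud via voxelization.

    Bins points into cubes of side `voxel` (metres), counts occupied
    voxels, and converts each to one voxel face (voxel**2) of leaf
    area -- a simplified stand-in for the contact-frequency model used
    in voxel-based LAI studies.
    """
    pts = np.asarray(points, dtype=float)
    idx = np.floor(pts / voxel).astype(int)      # integer voxel indices (i, j, k)
    occupied = np.unique(idx, axis=0)            # unique filled voxels
    if plot_area is None:
        # Fall back to the XY bounding-box area of the cloud.
        span = pts[:, :2].max(axis=0) - pts[:, :2].min(axis=0)
        plot_area = max(span[0] * span[1], voxel ** 2)
    leaf_area = len(occupied) * voxel ** 2       # one face per filled voxel
    return leaf_area / plot_area                 # dimensionless LAI proxy
```

Duplicate or near-coincident points fall into the same voxel and so do not inflate the estimate, which is the main appeal of the voxel representation over raw point counts.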
Show Figures

Figure 1: Distribution of sample plots.
Figure 2: Unmanned aerial vehicle (UAV) orthogonal flight plan.
Figure 3: General workflow.
Figure 4: Homologous point matching result.
Figure 5: Schematic diagram of the Leaf Area Index (LAI) field measurement scheme. The red line is the boundary of the sample plot, the black spots are the measuring stations, and the fan-shaped areas show the orientation of the LAI-2200 canopy analyzer probe rod.
Figure 6: Boxplots of the checkpoint absolute errors of the five schemes before and after point cloud registration in the vertical and horizontal directions. Before registration (a1: horizontal; a2: vertical); after registration (b1: horizontal; b2: vertical).
Figure 7: Distribution trend of the average LAI of 72 plots derived from the voxel model with different sub-voxel sizes (6 cm–15 cm, in 1 cm steps).
Figure 8: Root-mean-square error (RMSE) between the LAI derived from the five schemes and the effective leaf area index (LAIe) under different sub-voxel sizes.
Figure 9: The best LAI estimation for each of the five schemes under different sub-voxel sizes. Sub-voxel size at which each scheme's LAI was best: O (0.12 m); OT15 (0.10 m); OT30 (0.09 m); T15 (0.12 m); T30 (0.12 m).
Figure 10: In areas with different LAI levels, the leaf area density (LAD) distribution and the LAD increment relative to O of the five schemes at different heights from the ground (sub-voxel size 0.09 m). a: LAI interval 0.5–1; b: 1–1.5; c: 1.5–2; d: 2–2.5.
Figure 11: Side view showing the height difference between the surface models of O and OT30, where the colored area is OT30 and the gray area is O.
Figure 12: Local point cloud of OT30 and the vertical profile distribution of a single tree. For better display, the ground points are removed.
17 pages, 12229 KiB  
Article
Study and Evolution of the Dune Field of La Banya Spit in Ebro Delta (Spain) Using LiDAR Data and GPR
by Inmaculada Rodríguez-Santalla, David Gomez-Ortiz, Tomás Martín-Crespo, María José Sánchez-García, Isabel Montoya-Montes, Silvia Martín-Velázquez, Fernando Barrio, Jordi Serra, Juan Miguel Ramírez-Cuesta and Francisco Javier Gracia
Remote Sens. 2021, 13(4), 802; https://doi.org/10.3390/rs13040802 - 22 Feb 2021
Cited by 13 | Viewed by 3392
Abstract
La Banya spit, located to the south of the River Ebro Delta, is a sandy formation developed by the annexation of bars forming successive beach ridges, which are oriented and modeled by eastern and southern waves. The initial ridges run parallel to the coastline, and above them small dunes developed, whose crests are oriented by the dominant winds, forming foredune ridges and barchans. This study tested several techniques to understand the dune dynamics on this coastal spit between 2004 and 2012: LiDAR data were used to reconstruct changes in the surface and volume of the barchan dunes and foredunes, and ground-penetrating radar (GPR) was applied to image their internal structure and thus help explain their recent evolution. GPS data taken in the field, together with GIS techniques, made it possible to combine and compare the results. The results showed different trends for the barchan dunes and the foredunes: while the barchan dunes increased in area and volume between 2004 and 2012, the foredunes lost thickness. This was also reflected in the radargrams: the barchan dunes showed reflectors related to the growth of the foresets, while those associated with the foredunes presented truncations associated with storm events. Nevertheless, the global balance of dune occupation for the period 2004–2012 was positive. Full article
(This article belongs to the Special Issue Advances in Remote Sensing in Coastal Geomorphology)
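The LiDAR surface/volume comparison rests on a standard DEM-of-difference computation: subtract the 2004 DTM from the 2012 DTM cell by cell, then sum positive and negative cells into deposition and erosion volumes. A generic sketch (the cell size and noise threshold are placeholders, not values from the study):

```python
import numpy as np

def dod_volume(dem_new, dem_old, cell=1.0, threshold=0.0):
    """DEM of Difference: per-cell elevation change and volume change.

    `cell` is the raster cell size in metres (a hypothetical value; the
    study's DTM resolution is not restated here). Changes with
    |dz| <= threshold are ignored as elevation noise. Returns
    (deposition volume, erosion volume, net volume), all in m^3.
    """
    dz = np.asarray(dem_new, dtype=float) - np.asarray(dem_old, dtype=float)
    dz = np.where(np.abs(dz) > threshold, dz, 0.0)   # mask sub-threshold noise
    deposition = dz[dz > 0].sum() * cell ** 2        # gained material
    erosion = -dz[dz < 0].sum() * cell ** 2          # lost material (positive)
    return deposition, erosion, deposition - erosion
```

Applied to co-registered 2004 and 2012 grids, the positive and negative sums map directly onto the deposition and erosion zones shown in the study's comparison figures.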
Show Figures

Figure 1: Map showing the location of La Banya spit. The inset map (lower left) shows the location of La Banya and the surrounding region in the Ebro Delta. The green point is the location of the SIMAR point (Figure 3).
Figure 2: Digital Elevation Model (DEM) showing different historical coastal positions of the La Banya spit (yellow lines). Above, (a,b): different dune morphologies present on La Banya spit; (c): evolution of La Banya spit from Somoza et al. (1998) [44]. The red point marks the zone where shoreline movement is null and the evolutionary trend changes.
Figure 3: (a) Wave rose: significant wave height (m); (b) wind rose: average wind speed (m/s). SIMAR point 2,096,126 (location in Figure 1). Period: 2000–2014. Efficiency: 98.76%. Source: Puertos del Estado (http://www.puertos.es, accessed on 1 December 2020).
Figure 4: DTMs obtained from LiDAR data in 2004 (a) and 2012 (b); (c) dune field position in 2004 and 2012.
Figure 5: Location of GPR profiles by area (b); (a) track of sector A profiles; (c) track of sector G profiles; (d) track of sector E profiles.
Figure 6: DEM comparison for foredunes (a) and barchans (b) between 2004 and 2012, showing deposition and erosion zones (blue and red) between the coincident areas of both models, and loss or gain (yellow and orange) between non-coincident areas.
Figure 7: (a) Profiles located by DSAS; (b) EPR data showing the results of coastal migration.
Figure 8: Radargram P6, perpendicular to the main wind direction in the southern hemidelta. Upper panel: processed radargram. Lower panel: interpretation, with reflections drawn as lines to aid reading (see text for details). This profile presents low-angle reflectors corresponding to foreslope accretion, as well as several lateral truncations (black rectangles) associated with erosive events.
Figure 9: Radargram P15, parallel to the main wind direction in the southern hemidelta and to the shoreline. Upper panel: processed radargram. Lower panel: interpretation, with reflections drawn as lines (see text for details). This profile presents reflectors with sub-horizontal to undulating geometry showing vertical accretion. Black rectangles indicate reflectors associated with biotopographic accumulation.
Figure 10: Upper panel: processed radargram. Lower panel: interpretation, with reflections drawn as lines (see text for details). This profile presents convex-upward reflectors adapting to the morphology of the underlying beach ridge (A), together with low-angle reflectors associated with roll-over (B) and foreslope accretion (C).
Figure 11: (a) Oblique aerial photograph of La Banya spit showing the barchan dunes at a higher level than the foredunes (source: http://www.infosa.com/, accessed on 1 December 2020) and (b) Sentinel image of 21 January 2020 showing the flooded areas during Storm Gloria.
19 pages, 30340 KiB  
Article
Identification of Abandoned Jujube Fields Using Multi-Temporal High-Resolution Imagery and Machine Learning
by Xingrong Li, Chenghai Yang, Hongri Zhang, Panpan Wang, Jia Tang, Yanqin Tian and Qing Zhang
Remote Sens. 2021, 13(4), 801; https://doi.org/10.3390/rs13040801 - 22 Feb 2021
Cited by 9 | Viewed by 2748
Abstract
The jujube industry plays a very important role in the agricultural structure of Xinjiang, China. In recent years, the abandonment of jujube fields has gradually emerged. It is critical to inventory abandoned land soon after it appears, so that agricultural production can be better adjusted and the negative impacts of abandonment (such as outbreaks of diseases, insect pests, and fires) can be prevented. High-resolution multi-temporal satellite imagery can capture subtle differences among crops and provides a good tool for this problem. In this research, both field-based and pixel-based classification approaches using field boundaries were applied to estimate the percentage of abandoned jujube fields from multi-temporal high-spatial-resolution satellite images (Gaofen-1 and Gaofen-6) with the Random Forest algorithm. Both approaches produced good classification results and similar distributions of abandoned fields. The overall accuracy was 91.1% for the field-based classification and 90.0% for the pixel-based classification, with Kappa coefficients of 0.866 and 0.848, respectively. The areas of abandoned land detected in the field-based and pixel-based classification maps were 806.09 ha and 828.21 ha, respectively, accounting for 8.97% and 9.11% of the study area. In addition, feature importance evaluations of the two approaches showed that texture features were overall more important than vegetation indices, and that 31 October and 10 November were important dates for abandoned land detection. The methodology proposed in this study will be useful for identifying abandoned jujube fields and has the potential for large-scale application. Full article
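As a rough illustration of the classification workflow, the sketch below trains a scikit-learn Random Forest on synthetic features and reads Mean Decrease in Gini from `feature_importances_`. The features and labels are entirely synthetic stand-ins for the paper's multi-temporal texture and vegetation-index features, not its data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: two informative "texture" features and one pure
# noise feature per sample (hypothetical, for illustration only).
n = 400
texture = rng.normal(size=(n, 2))
noise = rng.normal(size=(n, 1))
X = np.hstack([texture, noise])
# Label 1 = "abandoned", 0 = "in production", driven only by texture.
y = (texture.sum(axis=1) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Mean Decrease in Gini is exposed as feature_importances_ (sums to 1);
# the informative texture features should outrank the noise feature.
importance = clf.feature_importances_
```

In the paper's setting the same importance vector, grouped by feature type and by image date, yields the texture-versus-index and date-importance rankings reported above.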
Show Figures

Graphical abstract

Figure 1: Location of the study area. (a) Xinjiang Uygur Autonomous Region; (b) Hotan Prefecture; (c) Regiment 224 and GF6 color-infrared (CIR) composite image; (d) zoomed-in jujube fields.
Figure 2: Three types of jujube fields: (a) in-production field; (b) abandoned field; and (c) alkali draining ditch.
Figure 3: Flowchart of image processing.
Figure 4: Location of ground-surveyed fields in September 2019.
Figure 5: Convergence of the accuracy of (a) Model 1 and (b) Model 2. The models were trained with an increasing number of decision trees, and classification accuracy was evaluated for the field-based (Model 1) and pixel-based (Model 2) classifications, respectively.
Figure 6: Land abandonment maps generated by (a) Model 1 and (b) Model 2 for the study area; zoomed-in scenes (a1)–(a3) and (b1)–(b3) for the areas marked on the maps are shown at the right.
Figure 7: The top 20 most important features according to Mean Decrease in Gini. (a) Model 1 and (b) Model 2.
Figure 8: Comprehensive importance of each feature of the nine temporal images. (a) Model 1 and (b) Model 2.
Figure 9: Importance of image dates. (a) Model 1 and (b) Model 2.
Figure 10: Changes in overall accuracy (OA) and Kappa coefficient (Kappa) as images were added. (a) Model 1, images added from low to high date importance; (b) Model 2, images added from low to high date importance; (c) Model 1, images added from high to low date importance; (d) Model 2, images added from high to low date importance.