Remote Sens., Volume 15, Issue 14 (July-2 2023) – 246 articles

Cover Story: This paper proposes a UAV-based computer vision framework for individual tree detection and health assessment. The approach involves a two-stage process: first, a tree detection model trained with a hard negative mining strategy on RGB UAV images; second, health classification using vegetation indices derived from multi-band imagery. The framework achieves an F1-score of 86.24% for tree detection and an overall accuracy of 97.52% for tree health assessment. The study demonstrates the robustness of the framework in accurately assessing orchard tree health from UAV images, and the approach holds potential for plant detection and health assessment in other plantation settings.
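The health-assessment stage relies on vegetation indices computed from multi-band imagery. As a brief illustration of the general idea (the paper's exact indices and thresholds are not given here; the reflectance values and the 0.3 cutoff below are made up), the standard NDVI can be computed per pixel as:

```python
import numpy as np

# Hypothetical NIR and red reflectance patches (all values made up).
nir = np.array([[0.45, 0.50],
                [0.30, 0.05]])
red = np.array([[0.08, 0.06],
                [0.12, 0.04]])

# NDVI = (NIR - Red) / (NIR + Red): a standard vegetation index in [-1, 1],
# with higher values indicating denser / healthier green vegetation.
ndvi = (nir - red) / (nir + red)

# Illustrative (not the paper's) health rule: flag low-NDVI pixels.
stressed = ndvi < 0.3
print(ndvi.round(3))
print(stressed)
```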
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open the file with the free Adobe Reader.
18 pages, 5793 KiB  
Technical Note
Spatially Variant Error Elimination for High-Resolution UAV SAR with Extremely Small Incident Angle
by Xintian Zhang, Shiyang Tang, Yi Ren, Jiahao Han, Chenghao Jiang, Juan Zhang, Yinan Li, Tong Jiang and Qi Dong
Remote Sens. 2023, 15(14), 3700; https://doi.org/10.3390/rs15143700 - 24 Jul 2023
Viewed by 1210
Abstract
Airborne synthetic aperture radar (SAR) is susceptible to atmospheric disturbance and other factors that offset the antenna phase center and introduce motion error. In close-range detection scenarios, the large elevation angle may make it impossible to directly observe areas near the underlying plane, resulting in observation blind spots. When the illumination elevation angle is extremely large, range-variant envelope error and phase modulations become more serious, and traditional two-step motion compensation (MOCO) methods may fail to provide accurate imaging. In addition, conventional phase gradient autofocus (PGA) algorithms suffer from reduced performance in scenes with few strong scattering points. To address these practical challenges, we analyze the motion error of UAV SAR under a large elevation angle and propose an improved phase-weighted estimation PGA algorithm that handles high-order range-variant motion error. Based on this algorithm, we introduce a combined focusing method that applies a threshold value for selection and optimization. Unlike traditional MOCO methods, the proposed method compensates more accurately for spatially variant motion error in scenes with few strong scattering points, indicating its wider applicability. The effectiveness of the proposed approach is verified by simulation and real-data experimental results.
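For readers unfamiliar with PGA, the classic (unweighted) algorithm that this paper improves upon estimates a common azimuth phase error from the data themselves. The following is a minimal numpy sketch of that textbook kernel on simulated range-compressed data; it is not the authors' phase-weighted variant, and the simulated quadratic error is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_az, n_rg = 128, 256

# Simulated range-compressed data: a dominant scatterer in every range bin,
# corrupted by a common (range-invariant) quadratic azimuth phase error.
t = np.linspace(-1.0, 1.0, n_az)
phase_err = 8.0 * t**2                       # the unknown error to estimate
data = np.exp(1j * phase_err)[:, None] * np.ones((n_az, n_rg))
data += 0.02 * (rng.standard_normal((n_az, n_rg))
                + 1j * rng.standard_normal((n_az, n_rg)))

# Classic PGA kernel: the conjugate product of adjacent azimuth samples,
# summed over range bins, estimates the phase *gradient*; integrate it,
# then remove the linear component (a harmless image shift).
prod = np.sum(data[1:, :] * np.conj(data[:-1, :]), axis=1)
est = np.concatenate(([0.0], np.cumsum(np.angle(prod))))
est -= np.polyval(np.polyfit(t, est, 1), t)

true = phase_err - np.polyval(np.polyfit(t, phase_err, 1), t)
print(np.max(np.abs(est - true)))            # residual phase error
```

With many bright range bins the residual is small; the paper's contribution is precisely the case where such bins are scarce and the error is range-variant.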
Figures:
Figure 1: SAR geometric model with motion error.
Figure 2: The range error caused by the x-axis motion error. (a) Rs = 10 km. (b) Rs = 5 km.
Figure 3: The relationship of antenna pitch with respect to the slant range.
Figure 4: Spatially variant error. (a) Rs = 10 km. (b) Rs = 5 km.
Figure 5: Focusing quality of the classic PGA for a featureless scene.
Figure 6: Phase error from first-order to fourth-order range spatial variation: (a) first order; (b) second order; (c) third order; (d) fourth order.
Figure 7: Flow chart of the combined autofocusing algorithm.
Figure 8: Simulation scene.
Figure 9: Simulated velocities along three axes: (a) x-axis; (b) y-axis; (c) z-axis.
Figure 10: Spatially variant error caused by motion error.
Figure 11: Imaging results of the traditional approach: (a) center point; (b) edge point.
Figure 12: Imaging results of the proposed approach: (a) center point; (b) edge point.
Figure 13: Residual phase error of (a) the traditional approach and (b) the proposed approach.
Figure 14: Imaging results of (a) the traditional approach and (b) the proposed approach.
Figure 15: Real-data imaging results in large scenes: (a) traditional algorithm; (b) proposed algorithm.
Figure 16: Real-data imaging results in small scenes: (a) traditional algorithm; (b) proposed algorithm.
21 pages, 5570 KiB  
Article
Integrated Node Infrastructure for Future Smart City Sensing and Response
by Dong Chen, Xiang Zhang, Wei Zhang and Xing Yin
Remote Sens. 2023, 15(14), 3699; https://doi.org/10.3390/rs15143699 - 24 Jul 2023
Viewed by 1744
Abstract
Emerging smart cities and digital twins are currently built from heterogeneous, cutting-edge, low-power remote sensing systems limited by diverse and inefficient communication and information technologies. Future smart cities delivering time-critical services and responses must transition towards massive numbers of sensors and more efficient integrated systems that rapidly communicate intelligent self-adaptation for collaborative operations. Here, we propose a critical futuristic integrated communication element named the City Sensing Base Station (CSBS), inspired by the base stations that address similar concerns for cell phones. A CSBS is designed to handle massive volumes of heterogeneous observation data that currently require upgrading through middleware or registration. It also provides predictive and interpolation modelling for the control of sensors and response units such as emergency services and drones. A prototype CSBS demonstrated that it could unify readily available heterogeneous sensing devices, including surveillance video, unmanned aerial vehicles, and ground sensor webs. Collaborative observation capability was also realized by integrating different object detection sources using advanced computer-vision technologies. Experiments with a traffic accident and a water pipeline emergency showed that sensing and intelligent analyses were greatly improved. The CSBS also significantly reduced redundant Internet connections while maintaining high efficiency. This innovation integrates high-density, high-diversity, and high-precision sensing in a distributed way for the future digital twin of cities.
(This article belongs to the Special Issue GeoAI and EO Big Data Driven Advances in Earth Environmental Science)
Figures:
Figure 1: Development process of ground observation. There are three main stages: manual processing, based on manual field measurements recorded in written form; automated observation, a process of automatic measurement and recording through computer systems and process instruments; and City Intelligent Service, a widely used service that records and saves all observation data through the Internet and the Internet of Things and provides a visualized intelligent information service system.
Figure 2: Schematic of the legacy sensor web and the sensor web with CSBS. The upper part describes the legacy sensor web; the lower part describes the architecture via CSBS. The schematic shows that the legacy infrastructure suffers from delays due to layers of network services: data transmission, network forwarding, and data protocol parsing. One-way information transmission also prevents reverse control flow from the service center to the sensors.
Figure 3: (a) Methods and processes for accessing heterogeneous observation platforms of CSBS. At the bottom left is the CSBS prototype device; the core of CSBS is the Access Processing Board, designed with a four-layer architecture. The first layer is the control circuit layer, including the SoC (System-on-Chip), the RAM (Random Access Memory) for running and storing the operating system and data, the ROM (Read-Only Memory), and the power management module. The second layer is cellular data access, including the baseband for 5G+GNSS NB-IoT. The third layer is the multi-protocol wireless access layer, including ZigBee, LoRa, BLE, RFID, and high-throughput communication modules with RF shielding between modules. The fourth layer is the extension layer, which can add physical-layer communication methods not currently available in CSBS to improve sensory access scalability. On the right side is the schematic diagram of multi-protocol observation platform fusion access, which enables hybrid access of the sensing platforms at a certain block scale in the city. At the bottom right is a case study of data protocol access resolution, including CSBS resolution schematics for common Modbus and transparent transport. The CSBS observation system realizes the management, access, and fusion control of observation resources at the block scale and displays them. (b) Multi-channel polling access model for LoPAN protocols, illustrating the workflow of the channel scanning and data identification procedure in UAM.
Figure 4: Positioning of CSBS in the city sensing system. The figure contains three parts: In-Block Info Services, CSBSs in Different Blocks, and City Info Services. With the support of CSBS, the observation information of sensing devices located in the same block can be processed directly by the CSBS, which also provides web-service- and RDS-based text information services to end-users in the block. There are different blocks in a city, and the CSBS in each block processes the observation data effectively. The GSW service center subscribes to these data from the CSBSs, performs further processing and display, and manages the CSBSs.
Figure 5: Transmission quality as measured by the decay of transmission bandwidth with distance in the experiment zone.
Figure 6: The process of a collaborative analysis of multiple observation platforms for a groundwater pipe rupture. (* in this figure means multiplication, and the date format follows Year-Month-Day.)
Figure 7: Model of collaborating heterogeneous sensing platforms in CSBS under traffic accident scenarios. The left side shows the co-observation model with the in situ observation station, the UAV, and the robot; the flowchart on the right is a schematic diagram of the co-observation procedure based on the co-observation model.
Figure 8: Case 2: CSBS-based multi-scale automatic observation and reconstruction of traffic accident scenes in cities. The left side shows the observation process of a traffic anomaly event, in which a traffic atmosphere sensing site and a traffic road camera identify the anomaly and trigger a collaborative observation procedure in which drones and robots actively observe the accident site. The right side shows an instant 3D retrospective of the event published through a WebGIS service. (In this figure, the date format follows Year-Month-Day, and m3 denotes cubic meters. The Chinese characters in the software in the lower right corner indicate "This is the detailed information of the sensing result".)
25 pages, 14012 KiB  
Article
Despeckling of SAR Images Using Residual Twin CNN and Multi-Resolution Attention Mechanism
by Blaž Pongrac and Dušan Gleich
Remote Sens. 2023, 15(14), 3698; https://doi.org/10.3390/rs15143698 - 24 Jul 2023
Cited by 2 | Viewed by 1559
Abstract
The despeckling of synthetic aperture radar images using two different convolutional neural network architectures is presented in this paper. The first method is a novel Siamese convolutional neural network with a dilated convolutional network in each branch. Recently, attention mechanisms have been introduced to convolutional networks to better model and recognize features; we therefore also propose a novel convolutional neural network design using an attention mechanism in an encoder–decoder-type network. This framework consists of a multiscale spatial attention network, which improves the modeling of semantic information at different spatial levels, and an additional attention mechanism that optimizes feature propagation. The two proposed methods differ in design but provide comparable despeckling results, in both subjective and objective measurements, on correlated speckle noise. The experimental results are evaluated on both synthetically generated speckled images and real SAR images. The methods proposed in this paper are able to despeckle SAR images while preserving SAR features.
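The multiplicative noise model underlying this line of work treats the observed intensity as the clean scene times unit-mean gamma-distributed speckle, and despeckling quality is often checked with ratio images. A toy sketch under these standard assumptions (the boxcar filter is only a stand-in, not either proposed CNN):

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative model: observed intensity Y = X * N, with N unit-mean
# gamma speckle of L looks (variance 1/L).
L = 4
clean = np.full((256, 256), 100.0)           # flat synthetic scene
noisy = clean * rng.gamma(shape=L, scale=1.0 / L, size=clean.shape)

# Stand-in despeckler: a 7x7 boxcar mean (the paper's CNNs are far more
# capable; this only illustrates the ratio-image check).
k = 7
pad = np.pad(noisy, k // 2, mode="reflect")
windows = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
filtered = windows.mean(axis=(-2, -1))

# On homogeneous areas, the ratio Y / X_hat for a good despeckler should
# look like pure speckle: mean ~1 and variance ~1/L.
ratio = noisy / filtered
print(round(ratio.mean(), 3), round(ratio.var(), 3))
```

Structured residue in the ratio image (edges, textures) indicates that scene detail, not just speckle, was removed — which is why the paper inspects ratio images for its real SAR results.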
(This article belongs to the Special Issue Advance in SAR Image Despeckling)
Figures:
Figure 1: Architecture of the denoising convolutional neural network [14].
Figure 2: SAR image despeckling assuming a multiplicative noise model [20].
Figure 3: SAR-DRN for SAR image despeckling [20].
Figure 4: Architecture of the U-shaped CNN [22].
Figure 5: Architecture of the Siamese-based Dilated Residual Convolutional Neural Network (SDRCNN).
Figure 6: Geocoded SAR images of an urban area. (a) ALOS 2 image with a ground resolution of 4 m. (b) TerraSAR-X image with a ground resolution of 1 m.
Figure 7: Structure of the Attention-Based Convolutional Neural Network (ABCNN).
Figure 8: Structure of the DRA network used within the proposed network.
Figure 9: Structure of the Attention Supervision Module (ASM).
Figure 10: Multi-resolution attention mechanism (MAM). (a) Structure of the MAM. (b) Structure of the residual network within the MAM and of the ECA network.
Figure 11: Synthetic SAR homogeneous images. (a) Original image. (b) Speckled image. (c)–(h) Images despeckled using the SDRCNN, ABCNN, SARBM3D, DCNN, OCNN, and SAR-CAM methods, respectively.
Figure 12: Synthetic SAR square images, with panels (a)–(h) as in Figure 11.
Figure 13: Synthetic SAR image depicting a building, with panels (a)–(h) as in Figure 11.
Figure 14: Synthetic SAR image depicting a corner reflector, with panels (a)–(h) as in Figure 11.
Figure 15: Synthetic SAR DEM image, with panels (a)–(h) as in Figure 11.
Figure 16: Real SAR images. (a) Mosaic of SAR images, 800 × 800 pixels in size. (b)–(g) Images despeckled using the SDRCNN, ABCNN, SARBM3D, DCNN, OCNN, and SAR-CAM methods, respectively.
Figure 17: Ratio images between the original SAR image shown in Figure 16a and the despeckled images shown in Figure 16b–g: (a)–(f) SDRCNN, ABCNN, SARBM3D, DCNN, OCNN, and SAR-CAM, respectively.
Figure 18: Real SAR image. (a) Original SAR image ©DLR 2012 (1024 × 1024 pixels). (b)–(g) Images despeckled using the SDRCNN, ABCNN, SARBM3D, DCNN, OCNN, and SAR-CAM methods, respectively.
Figure 19: Ratio images between the original SAR image shown in Figure 18a and the despeckled images shown in Figure 18b–g: (a)–(f) SDRCNN, ABCNN, SARBM3D, DCNN, OCNN, and SAR-CAM, respectively.
21 pages, 5993 KiB  
Article
Locality Preserving Property Constrained Contrastive Learning for Object Classification in SAR Imagery
by Jing Wang, Sirui Tian, Xiaolin Feng, Bo Zhang, Fan Wu, Hong Zhang and Chao Wang
Remote Sens. 2023, 15(14), 3697; https://doi.org/10.3390/rs15143697 - 24 Jul 2023
Cited by 1 | Viewed by 1221
Abstract
Robust unsupervised feature learning is a critical yet difficult task for synthetic aperture radar (SAR) automatic target recognition (ATR) with limited labeled data. The developing contrastive self-supervised learning (CSL) method, which learns informative representations by solving an instance discrimination task, provides a novel way to learn discriminative features from unlabeled SAR images. However, the instance-level contrastive loss can magnify the differences between samples belonging to the same class in the latent feature space; CSL can therefore disperse targets from the same class and degrade downstream classification. To address this problem, this paper proposes a novel framework called locality preserving property constrained contrastive learning (LPPCL), which not only learns informative representations of the data but also preserves the local similarity property in the latent feature space. In LPPCL, the traditional InfoNCE loss of CSL models is reformulated in a cross-entropy form in which the local similarity of the original data is embedded as pseudo labels. Furthermore, the traditional two-branch CSL architecture is extended to a multi-branch structure, improving the robustness of models trained with limited batch sizes and samples. Finally, a self-attentive pooling module replaces the global average pooling layer commonly used in standard encoders, providing an adaptive way to retain information that benefits downstream tasks during pooling and significantly improving the performance of the model. Validation and ablation experiments on the MSTAR dataset show that the proposed framework outperforms classic CSL methods and achieves state-of-the-art (SOTA) results.
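The instance-level contrastive loss that the paper reformulates is the standard InfoNCE (NT-Xent) objective, which is itself a softmax cross-entropy with one-hot positives; LPPCL replaces those one-hot targets with locality-based pseudo labels. A minimal numpy sketch of the baseline loss (the batch size, temperature, and embeddings below are arbitrary, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.5):
    """Instance-level InfoNCE (NT-Xent) written as a softmax cross-entropy
    over cosine similarities; view i's positive is its sibling view at
    index (i + n) % 2n in the concatenated batch."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n2 = len(z)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)        # a view is not its own negative
    pos = (np.arange(n2) + n2 // 2) % n2
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logp[np.arange(n2), pos].mean()

z1 = rng.standard_normal((8, 16))
z2 = z1 + 0.05 * rng.standard_normal((8, 16))   # views of the same instances
loss_matched = info_nce(z1, z2)
loss_mismatched = info_nce(z1, rng.standard_normal((8, 16)))
print(loss_matched, loss_mismatched)
```

Because the denominator treats every other instance as a negative, two same-class targets are pushed apart — exactly the failure mode the locality-preserving pseudo labels are designed to soften.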
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition)
Figures:
Figure 1: Basic concept of the SimCLR framework. V1_anchor and V2_anchor are two augmented views generated from the same anchor instance x_anchor and are regarded as a positive pair by SimCLR, while all views from other instances are regarded as negative samples, whether from the same class (V1_positive and V2_positive) or from a different class (V1_negative and V2_negative), which is the cause of biased sampling.
Figure 2: Basic architecture of the proposed model. In LPPCL, for the augmented view Vi_anchor, the positive samples include not only the augmented view Vj_anchor from the anchor instance x_anchor, but also the views Vi_positive and Vj_positive of x_anchor's adjacent sample x_positive, which quite possibly comes from the same class as x_anchor.
Figure 3: The multi-branch structure of the proposed model.
Figure 4: The overall framework of the model architecture, which improves the ResNet50 network by using self-attentive pooling layers after the convolutional layers. The basic block of the ResNet network is shown in the lower left corner, where c denotes the number of channels. The non-local self-attentive pooling module is shown in the lower right corner, with the red boxes marking the receptive fields of the pooling weights.
Figure 5: Non-local self-attentive pooling.
Figure 6: Photographs and corresponding SAR images of the MSTAR dataset.
Figure 7: The training loss curve of the proposed model.
Figure 8: Recognition rates of different models.
Figure 9: Recognition rates of different models.
Figure 10: MSTAR data with different SNRs.
Figure 11: Recognition results at different SNR levels.
Figure 12: MSTAR data at different resolutions.
Figure 13: Recognition results at different resolutions.
14 pages, 24765 KiB  
Communication
Sea Surface Chlorophyll-a Concentration Retrieval from HY-1C Satellite Data Based on Residual Network
by Guiying Yang, Xiaomin Ye, Qing Xu, Xiaobin Yin and Siyang Xu
Remote Sens. 2023, 15(14), 3696; https://doi.org/10.3390/rs15143696 - 24 Jul 2023
Cited by 4 | Viewed by 1650
Abstract
A residual network (ResNet) model was proposed for estimating Chl-a concentrations in global oceans from the remote sensing reflectance (Rrs) observed by the Chinese ocean color and temperature scanner (COCTS) onboard the HY-1C satellite. A total of 52 images from September 2018 to September 2019 were collected, with label data from the multi-task Ocean Color-Climate Change Initiative (OC-CCI) daily products. Feature selection and sensitivity experiments show that the logarithmic values of Rrs565 and of the ratios Rrs520/Rrs443, Rrs565/Rrs490, Rrs520/Rrs490, Rrs490/Rrs443, and Rrs670/Rrs565 are the optimal input parameters for the model. Compared with the classical empirical OC4 algorithm and other machine learning models, including the artificial neural network (ANN), deep neural network (DNN), and random forest (RF), the ResNet retrievals agree better with the OC-CCI Chl-a products: the root-mean-square error (RMSE), unbiased percentage difference (UPD), and logarithmic correlation coefficient R(log) are 0.13 mg/m3, 17.31%, and 0.97, respectively. The performance of the ResNet model was also evaluated against in situ measurements from the Aerosol Robotic Network-Ocean Color (AERONET-OC) and field survey observations in the East and South China Seas; compared with the DNN, ANN, RF, and OC4 models, the UPD is reduced by 5.9%, 0.7%, 6.8%, and 6.3%, respectively.
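The reported skill scores can be reproduced from matched retrievals and reference values. The definitions below are common forms, not necessarily the exact ones used in the paper (the symmetric UPD formula in particular is an assumption), and the sample concentrations are made up:

```python
import numpy as np

def rmse(est, ref):
    # root-mean-square error in linear concentration units (mg/m^3)
    return np.sqrt(np.mean((est - ref) ** 2))

def upd(est, ref):
    # unbiased percentage difference: symmetric in estimate and reference
    return 100.0 * np.mean(2.0 * np.abs(est - ref) / (est + ref))

def r_log(est, ref):
    # correlation coefficient of log10-transformed concentrations,
    # appropriate for Chl-a, which spans orders of magnitude
    return np.corrcoef(np.log10(est), np.log10(ref))[0, 1]

ref = np.array([0.05, 0.1, 0.3, 1.0, 3.0])          # mg/m^3, made-up values
est = ref * np.array([1.1, 0.9, 1.05, 0.95, 1.2])   # hypothetical retrievals
print(rmse(est, ref), upd(est, ref), r_log(est, ref))
```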
Figures:
Figure 1: HY-1C/COCTS observation in the Bohai Sea and Yellow Sea at 02:53 UTC on 11 November 2019. (a)–(e) Rrs at central wavelengths of 443, 490, 520, 565, and 670 nm, respectively.
Figure 2: Spatial distribution of in situ measurements of sea surface Chl-a concentration matched with COCTS/HY-1C observations. Red and blue dots denote the locations of AERONET-OC data and of field surveys in the East and South China Seas, respectively.
Figure 3: Spatial distribution of OC-CCI products matched with COCTS/HY-1C data: (a) training dataset; (b) testing dataset.
Figure 4: (a) Schematic diagram of ResNet for sea surface Chl-a concentration estimation; the yellow circle represents the output of the second layer. (b) Schematic diagram of the residual block.
Figure 5: Importance of different model input parameters based on the PFI analysis.
Figure 6: (a) RMSE, (b) UPD, and (c) R(log) between the sea surface Chl-a concentration estimated using the ResNet or DNN model and the OC-CCI products. The suffixes 1 and 2 after a model's name denote the non-classification and classification models, respectively.
Figure 7: Comparison of the sea surface Chl-a concentration retrieved using (a) ResNet, (b) DNN, (c) ANN, (d) RF, and (e) OC4 with the OC-CCI product.
Figure 8: Frequency distribution of the sea surface Chl-a concentration retrieved using (a) ResNet, (b) DNN, (c) ANN, (d) RF, and (e) OC4, together with that of the OC-CCI product.
Figure 9: Comparison of the sea surface Chl-a concentrations retrieved using (a) ResNet, (b) DNN, (c) ANN, (d) RF, and (e) OC4 with in situ measurements. Black and blue dots denote measurements from the AERONET-OC data and from field surveys in the East and South China Seas, respectively.
Figure 10: (a) Sea surface Chl-a concentration in the Bohai Sea and Yellow Sea estimated from COCTS/HY-1C at 02:45 UTC on 24 January 2019. (b) Terra/MODIS product at 02:50 UTC on the same day (http://oceancolor.gsfc.nasa.gov/, accessed on 12 December 2022). (c) Scatter plot of COCTS- versus MODIS-derived Chl-a concentrations.
Figure 11: Variation of the UPD of the ResNet-derived Chl-a concentration against in situ measurements with offshore distance.
17 pages, 5422 KiB  
Article
Improving the Accuracy of TanDEM-X Digital Elevation Model Using Least Squares Collocation Method
by Xingdong Shen, Cui Zhou and Jianjun Zhu
Remote Sens. 2023, 15(14), 3695; https://doi.org/10.3390/rs15143695 - 24 Jul 2023
Cited by 4 | Viewed by 1474
Abstract
The TanDEM-X Digital Elevation Model (DEM) is limited by the radar side-view imaging mode and still contains gaps and anomalies that directly affect the application potential of the data. Many methods have been used to improve the accuracy of TanDEM-X DEM, but these algorithms primarily focus on eliminating systematic errors trending over a large area in the DEM, rather than random errors. Therefore, this paper presents the least-squares collocation-based error correction algorithm (LSC-TXC) for TanDEM-X DEM, which effectively eliminates both systematic and random errors, to enhance the accuracy of TanDEM-X DEM. The experimental results demonstrate that TanDEM-X DEM corrected by the LSC-TXC algorithm reduces the root mean square error (RMSE) from 6.141 m to 3.851 m, a significant improvement in accuracy (by 37.3%). Compared to three conventional algorithms, namely Random Forest, the Height Difference Fitting Neural Network, and the Back-Propagation Neural Network, the presented algorithm reduces the RMSEs of the corrected TanDEM-X DEMs by 6.5%, 7.6%, and 18.1%, respectively. This algorithm provides an efficient tool for correcting DEMs such as TanDEM-X over a wide range of areas. Full article
(This article belongs to the Special Issue Remote Sensing Data Fusion and Applications)
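The least-squares collocation step described in the abstract can be sketched as follows: GCP residuals are treated as a spatially correlated signal plus noise, and the signal is predicted at grid points from a prior covariance model. The Gaussian covariance function, its parameters, and the helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gaussian_cov(d, c0, length):
    """Prior covariance model C(d) = C0 * exp(-(d/L)^2)."""
    return c0 * np.exp(-(d / length) ** 2)

def lsc_correct(xy_gcp, dh_gcp, xy_grid, c0=1.0, length=500.0, noise_var=0.05):
    """Predict the correlated DEM error at grid points from GCP residuals.

    dh_gcp holds elevation residuals (reference minus DEM) at GCP
    locations; the returned field is added to the DEM as a correction.
    """
    # Pairwise distances: GCP-to-GCP and grid-to-GCP.
    d_gg = np.linalg.norm(xy_gcp[:, None, :] - xy_gcp[None, :, :], axis=-1)
    d_pg = np.linalg.norm(xy_grid[:, None, :] - xy_gcp[None, :, :], axis=-1)
    # Collocation: s_hat = C_pg (C_gg + C_nn)^(-1) * dh.
    c_gg = gaussian_cov(d_gg, c0, length) + noise_var * np.eye(len(xy_gcp))
    c_pg = gaussian_cov(d_pg, c0, length)
    return c_pg @ np.linalg.solve(c_gg, dh_gcp)
```

Near a GCP the prediction approaches the observed residual (damped by the noise variance), and far beyond the correlation length it decays to zero, which is what lets the method remove random as well as systematic error components.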
Figure 1. Study site and datasets: (a) location of the study area (the red rectangles show its location), (b) Sentinel-2 satellite image, (c) TanDEM-X DEM, (d) terrain slope.
Figure 2. Schematic diagram of TanDEM-X/TerraSAR-X DEM generation.
Figure 3. Comparison of the orbital phase error before and after correction: (a) before correction, (b) after correction, (c) difference chart.
Figure 4. The relationship between terrain features and TanDEM-X DEM error: (a) elevation error map, (b) terrain slope error map (the red line is the trend line), and (c) terrain slope error map.
Figure 5. Graph of the relationship between prior covariance and interval distance.
Figure 6. LSC-TXC algorithm flowchart.
Figure 7. (a) Original TanDEM-X DEM, (b) TanDEM-X DEM corrected by the LSC-TXC algorithm, and (c) the difference between the corrected and original TanDEM-X DEMs.
Figure 8. Error histogram of the TanDEM-X DEM relative to the ICESat-2 points before and after correction.
Figure 9. Local error trends of the TanDEM-X DEM before and after correction: (a) Profile 1, and (b) Profile 2.
Figure 10. Relationship between the TanDEM-X DEM error and terrain slope relative to ICESat-2 data: (a) before correction, and (b) after correction (the red line is the trend line).
Figure 11. Influence of terrain slope, relief, and elevation grade on the ME of the TanDEM-X DEM (before and after correction). Classes: I: 0–5° slope, 0–30 m relief, 300–700 m elevation; II: 5–10°, 30–60 m, 700–1100 m; III: 10–15°, 60–90 m, 1100–1500 m; IV: 15–20°, 90–120 m, 1500–1900 m; V: >20°, >120 m, >1900 m. "S", "R", and "E" stand for slope, relief, and elevation, respectively.
Figure 12. Influence of terrain slope, relief, and elevation grade on the RMSE of the TanDEM-X DEM (before and after correction); slope, relief, and elevation classes as in Figure 11.
Figure 13. (a) ME and (b) RMSE of the TanDEM-X DEM under different land uses before and after correction. CL: croplands, FR: forests, GL: grasslands, SL: shrublands, WA: water area, BL: built-up lands.
22 pages, 7826 KiB  
Article
An Improved VMD-LSTM Model for Time-Varying GNSS Time Series Prediction with Temporally Correlated Noise
by Hongkang Chen, Tieding Lu, Jiahui Huang, Xiaoxing He, Kegen Yu, Xiwen Sun, Xiaping Ma and Zhengkai Huang
Remote Sens. 2023, 15(14), 3694; https://doi.org/10.3390/rs15143694 - 24 Jul 2023
Cited by 8 | Viewed by 2509
Abstract
GNSS time series prediction plays a significant role in monitoring crustal plate motion, landslide detection, and the maintenance of the global coordinate framework. Long short-term memory (LSTM) is a deep learning model that has been widely applied in the field of high-precision time series prediction and is often combined with Variational Mode Decomposition (VMD) to form the VMD-LSTM hybrid model. To further improve the prediction accuracy of the VMD-LSTM model, this paper proposes a dual variational mode decomposition long short-term memory (DVMD-LSTM) model to effectively handle noise in GNSS time series prediction. This model extracts fluctuation features from the residual terms obtained after VMD decomposition to reduce the prediction errors associated with residual terms in the VMD-LSTM model. Daily E, N, and U coordinate data recorded at multiple GNSS stations between 2000 and 2022 were used to validate the performance of the proposed DVMD-LSTM model. The experimental results demonstrate that, compared to the VMD-LSTM model, the DVMD-LSTM model achieves significant improvements in prediction performance across all stations: the average RMSE is reduced by 9.86%, the average MAE is reduced by 9.44%, and the average R2 is increased by 17.97%. Furthermore, the average accuracy of the optimal noise model for the predicted results is improved by 36.50%, and the average velocity accuracy of the predicted results is enhanced by 33.02%. These findings collectively attest to the superior predictive capabilities of the DVMD-LSTM model and the reliability of its predictions. Full article
(This article belongs to the Special Issue Advances in GNSS for Time Series Analysis)
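The decompose-predict-sum structure of such hybrid models can be sketched as below. This is only a structural illustration: a moving-average split stands in for VMD, and a persistence forecast stands in for the trained per-component LSTMs, so none of the functions here reflect the paper's actual models:

```python
import numpy as np

def naive_decompose(x, window=5):
    """Stand-in for VMD: split a series into a smooth trend and a residual.

    A real VMD returns K band-limited IMFs plus a residual; here a moving
    average plays the role of the low-frequency component.
    """
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")
    return trend, x - trend

def persistence_forecast(component, horizon):
    """Stand-in for a trained LSTM: repeat the last observed value."""
    return np.full(horizon, component[-1])

def hybrid_forecast(x, horizon=3):
    """Forecast each component separately, then sum the predictions,
    exactly as VMD-LSTM-style hybrids reconstruct their output."""
    trend, resid = naive_decompose(x)
    return persistence_forecast(trend, horizon) + persistence_forecast(resid, horizon)
```

The key property the sketch preserves is that the decomposition is exactly additive, so modeling the residual term explicitly (as DVMD-LSTM does) contributes directly to the reconstructed prediction rather than being discarded.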
Figure 1. Basic structure of LSTM.
Figure 2. DVMD-LSTM hybrid model prediction process.
Figure 3. Distribution map of each GNSS station.
Figure 4. Three-direction interpolation comparison chart of the GOBS station.
Figure 5. Prediction results for each IMF and the residual term under different models after VMD decomposition in the U direction of the SEDR station (black curves: original data, IMF components, and residual terms obtained from VMD decomposition; red curves: predictions of IMF components by the DVMD-LSTM and VMD-LSTM models; blue curve: prediction of the residual term by the VMD-LSTM model; green curve: prediction of the residual term by the DVMD-LSTM model).
Figure 6. Comparison of prediction results and prediction error R in the three directions of the SEDR station under different models (sub-figures (a–c) show the prediction results of each model; sub-figures (d–f) compare the prediction error R of each model).
Figure A1. ALBH station data distribution.
Figure A2. BURN station data distribution.
Figure A3. CEDA station data distribution.
Figure A4. FOOT station data distribution.
Figure A5. GOBS station data distribution.
Figure A6. RHCL station data distribution.
Figure A7. SEDR station data distribution.
Figure A8. SMEL station data distribution.
23 pages, 16640 KiB  
Article
A Super-Resolution Algorithm Based on Hybrid Network for Multi-Channel Remote Sensing Images
by Zhen Li, Wenjuan Zhang, Jie Pan, Ruiqi Sun and Lingyu Sha
Remote Sens. 2023, 15(14), 3693; https://doi.org/10.3390/rs15143693 - 24 Jul 2023
Cited by 3 | Viewed by 1511
Abstract
In recent years, the development of super-resolution (SR) algorithms based on convolutional neural networks has become an important topic in enhancing the resolution of multi-channel remote sensing images. However, most of the existing SR models suffer from the insufficient utilization of spectral information, limiting their SR performance. Here, we derive a novel hybrid SR network (HSRN) which facilitates the acquisition of joint spatial–spectral information to enhance the spatial resolution of multi-channel remote sensing images. The main contributions of this paper are three-fold: (1) in order to sufficiently extract the spatial–spectral information of multi-channel remote sensing images, we designed a hybrid three-dimensional (3D) and two-dimensional (2D) convolution module which can distill the nonlinear spectral and spatial information simultaneously; (2) to enhance the discriminative learning ability, we designed the attention structure, including channel attention, before the upsampling block and spatial attention after the upsampling block, to weigh and rescale the spectral and spatial features; and (3) to acquire fine quality and clear texture for reconstructed SR images, we introduced a multi-scale structural similarity index into our loss function to constrain the HSRN model. The qualitative and quantitative comparisons were carried out in comparison with other SR methods on public remote sensing datasets. It is demonstrated that our HSRN outperforms state-of-the-art methods on multi-channel remote sensing images. Full article
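The structural-similarity term introduced into the loss function can be illustrated with a single-scale, single-window SSIM (MS-SSIM averages such terms over scales and local windows). The constants C1 and C2 and the 0.84 mixing weight below are common choices from the SSIM literature and are assumptions here, not the paper's exact configuration:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hybrid_loss(pred, target, alpha=0.84):
    """Mix a structural term with an L1 term: one common weighting is
    L = alpha * (1 - SSIM) + (1 - alpha) * L1."""
    l1 = np.abs(pred - target).mean()
    return alpha * (1.0 - global_ssim(pred, target)) + (1.0 - alpha) * l1
```

A perfect reconstruction drives both terms to zero, while structural distortions (inverted contrast, smeared texture) are penalized even when the mean absolute error stays small, which is why an SSIM-style term helps preserve texture in reconstructed SR images.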
Figure 1. The architecture of the adopted residual channel attention block (RCAB).
Figure 2. The flowchart of our HSRN model for multi-channel images.
Figure 3. The architecture of the hybrid 3D–2D module.
Figure 4. The schematic of the sub-pixel upsampling block.
Figure 5. The architecture of the residual spatial attention block (RSAB).
Figure 6. The loss function and corresponding PSNR of HSRN on the SEN12MS-CR dataset with an upsampling factor of 2 over 1000 epochs.
Figure 7. Reconstruction maps and spectral curves of the super-resolution models on the seaside areas from the AVIRIS dataset with a scale factor of ×4.
Figure 8. Reconstruction maps and spectral curves of the super-resolution models on the mountain areas from the AVIRIS dataset with a scale factor of ×4.
Figure 9. Reconstruction maps and spectral curves of the SR models on the river areas from the SEN12MS-CR dataset with a scale factor of ×8.
Figure 10. Reconstruction maps and spectral curves of the SR models on the urban areas from the SEN12MS-CR dataset with a scale factor of ×8.
Figure 11. Reconstruction maps and spectral curves of the SR models on the forest areas from the WHU Building dataset with a scale factor of ×4.
Figure 12. PSNR (dB) and SSIM comparisons under different coefficients of the loss function; results calculated on the SEN12MS-CR dataset with a scaling factor of ×4 over 200 epochs.
Figure 13. Effect of HSRN with different convolutional structures; curves based on the PSNR (dB) on the SEN12MS-CR dataset with an upsampling factor of ×4 over 1000 epochs.
27 pages, 8386 KiB  
Article
Towards a Guideline for UAV-Based Data Acquisition for Geomorphic Applications
by Dipro Sarkar, Rajiv Sinha and Bodo Bookhagen
Remote Sens. 2023, 15(14), 3692; https://doi.org/10.3390/rs15143692 - 24 Jul 2023
Cited by 3 | Viewed by 1951
Abstract
Recent years have seen a rapid rise in the generation of high-resolution topographic data using custom-built or commercial-grade Unmanned Aerial Vehicles (UAVs). Though several studies have demonstrated the application potential of UAV data, significant knowledge gaps still exist in terms of proper documentation of protocols for data acquisition, post-flight data processing, error assessments, and their mitigation. This work documents and provides guidelines for UAV data acquisition and processing from several years of field experience in diverse geomorphic settings across India, including undulating topography (~17 km2), alluvial plains (~142 km2), lowland-river basin (~66 km2), and a highly urbanized area (~5 km2). A total of 37,065 images with 16 and 20 Megapixels and 604 ground control points (GCPs) were captured with multiple UAV systems and processed to generate point clouds for a total area of ~230 km2. The Root Mean Square Error (RMSE) for each GCP for all sites ranged from 6.41 cm to 36.54 cm. This manuscript documents a comprehensive guideline for (a) pre-field flight planning and data acquisition, (b) generation and removal of noise and errors of the point cloud, and (c) generation of orthoimages and digital elevation models. We demonstrate that a well-distributed and not necessarily uniformly distributed GCP placement can significantly reduce doming error and other artifacts. We emphasize the need for using separate camera calibration parameters for each flight and demonstrate that errors in camera calibration can significantly impact the accuracy of the point cloud. Accordingly, we have evaluated the stability of lens calibration parameters between consumer-grade and professional cameras and have suggested measures for noise removal in the point cloud data. We have also identified and analyzed various errors during point cloud processing. 
These include systematic doming errors, errors during orthoimage and DEM generation, and errors related to water bodies, and mitigation strategies for each are discussed. Finally, we have assessed the accuracy of our point cloud data for the different geomorphic settings and conclude that the accuracy is influenced by the Ground Sampling Distance (GSD), topographic features, and the placement, density, and distribution of GCPs. The guideline presented in this paper can be of great benefit to both experienced long-term users and newcomers in planning UAV-based topographic surveys and processing the acquired data. Full article
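Since the reported accuracies hinge on the Ground Sampling Distance and the per-GCP RMSE, the sketch below shows the standard pinhole GSD relation and a per-axis RMSE computation; the camera parameters in the usage note are hypothetical values for illustration, not those of the surveyed systems:

```python
import math

def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flight_height_m, image_width_px):
    """GSD (m/pixel) from the standard pinhole relation:
    GSD = sensor_width * flight_height / (focal_length * image_width)."""
    return (sensor_width_mm / 1000.0) * flight_height_m / (
        (focal_length_mm / 1000.0) * image_width_px)

def gcp_rmse(dx, dy, dz):
    """Per-axis residual lists -> (horizontal, vertical) RMSE, same units
    as the inputs."""
    n = len(dz)
    rmse_z = math.sqrt(sum(v * v for v in dz) / n)
    rmse_xy = math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / n)
    return rmse_xy, rmse_z
```

For an assumed 13.2 mm sensor width, 8.8 mm focal length, 100 m flight height, and 5472-pixel image width (typical of a 1-inch 20-megapixel camera), the relation gives a GSD of about 2.7 cm/pixel; halving the flight height halves the GSD.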
Figure 1. Study areas for UAV data acquisition and processing in the Indus-Gangetic Plain, India: (1) Mandsaur, Madhya Pradesh; (2) Mayurbhanj, Odisha; (3) Kawardha, Chhattisgarh; and (4) Anpara, Uttar Pradesh.
Figure 2. Conceptual framework illustrating the steps discussed and presented in the guideline document.
Figure 3. A comparison of the spread of GCPs as planned (red dots) and as acquired in the field (black dots). A disparity will always exist because many pre-planned points may not be reachable; the planned locations serve as guidance to ensure a homogeneous point distribution. The greatest distance between two GCPs in this instance (Mayurbhanj) is always less than 1 km. For large-area mapping, maintaining a maximum distance between numerous GCPs is only possible with prior planning. For this work, we downloaded the generated random points to our mobile devices and used mobile GIS apps, such as GPX Viewer or Google Earth, to locate and observe the closest likely GCP site.
Figure 4. (a) Distribution of the vertical difference (dZ) between point clouds P-A (with n = 40 GCPs) and P-B (without GCPs) for the Mandsaur (1) site. The peripheral sections concave upward (higher elevations) while the central portion converges (lower elevations) compared to a reference point cloud; this is referred to as a dishing effect (doming has higher elevations in the center of the point cloud). (b) Histogram of elevation differences (dZ) calculated from point-to-point distances using the closest points between the clouds.
Figure 5. (a) Distribution of the vertical difference (dZ) between DEMs D1 and D2 for the Mandsaur site (1). (b) Distribution of the vertical difference between DEMs D3 and D2. (c) dZ plot of D1 and D2, mostly within 0.5 m. (d) dZ plot of D3 and D2, with higher dZ values.
Figure 6. Gridded raster based on the cloud-to-cloud distance and the tilt when GCPs are moved: (a) difference (dZ) between P-A and the original Mandsaur point cloud; (b) difference (dZ) between P-A and P-B; (c) dZ profile showing a tilting of the area towards the SE.
Figure 7. (a) DEMs generated by AMP and Blast2DEM for a part of the Kawardha (4) area. (b) Edges created in the DEMs (see description in the text). (c) Comparison profile of DEMs generated from the same point cloud by AMP and Blast2DEM; note the step-like features created by the interpolation employed by AMP.
Figure 8. Two examples of blurred edges, (a) structural edges and (b) trees, created in orthoimages when downscaled images are used to generate the point cloud. Downscaling results in jagged or smudged edges. Downscaling is step scaling in the x and y directions; a 1:2 ratio indicates that the image is downscaled four times (two times each in the x and y directions).
Figure 9. An example from Kawardha (3) wherein the difference between the GCP elevation and the DSM is calculated to find the nearest point reaching the GCP altitude. (a) GCP position on the terrace in an individual image; the red dot marks the corner of the terrace. (b) Generated DSM and the profile line (red). (c) Cross profile along the profile line over the DSM (brown); the position of the red dot moved to the ground as the edge shifted by approximately 1.48 m, and the horizontal and vertical red lines trace the cursor position in the horizontal and vertical planes. (d) Top view of the corner with superimposed elevation differences, highlighting the difference between the actual corner and the generated corner.
Figure 10. Generated noise over water, and confidence statistics, for the Anpara (5) site: (a) top view and (b) side view (inset area) before cleaning based on image pairs (called confidence in AMP); (c) top view and (d) side view (inset area) after filtering.
Figure 11. RMSEz (in cm) from validation GCPs (N) for the different study areas; the blue line represents the mean.
1 page, 161 KiB  
Correction
Correction: Lei et al. Three-Dimensional Surface Deformation Characteristics Based on Time Series InSAR and GPS Technologies in Beijing, China. Remote Sens. 2021, 13, 3964
by Kunchao Lei, Fengshan Ma, Beibei Chen, Yong Luo, Wenjun Cui, Yi Zhou, He Liu and Te Sha
Remote Sens. 2023, 15(14), 3691; https://doi.org/10.3390/rs15143691 - 24 Jul 2023
Viewed by 724
Abstract
In the published article [...] Full article
17 pages, 6554 KiB  
Article
Biophysical Variable Retrieval of Silage Maize with Gaussian Process Regression and Hyperparameter Optimization Algorithms
by Elahe Akbari, Ali Darvishi Boloorani, Jochem Verrelst, Stefano Pignatti, Najmeh Neysani Samany, Saeid Soufizadeh and Saeid Hamzeh
Remote Sens. 2023, 15(14), 3690; https://doi.org/10.3390/rs15143690 - 24 Jul 2023
Cited by 1 | Viewed by 1391
Abstract
Quantification of vegetation biophysical variables such as leaf area index (LAI), fractional vegetation cover (fCover), and biomass is central to hydrological, agricultural, and irrigation management studies. The present study proposes kernel-based machine learning algorithms capable of adaptive and nonlinear data fitting, with the aim of producing a suitable, accurate, and robust approach for the spatio-temporal estimation of the three mentioned variables using Sentinel-2 images. To this aim, Gaussian process regression (GPR)–particle swarm optimization (PSO), GPR–genetic algorithm (GA), GPR–tabu search (TS), and GPR–simulated annealing (SA) hyperparameter-optimized algorithms were developed and compared against kernel-based machine learning regression algorithms and artificial neural network (ANN) and random forest (RF) algorithms. The accuracy of the proposed algorithms was assessed using digital hemispherical photography (DHP) data and destructive measurements performed during the growing season of silage maize in agricultural fields of Ghale-Nou, southern Tehran, Iran, in the summer of 2019. The results on biophysical variables against validation data showed that the developed GPR-PSO algorithm outperformed the other algorithms under study in terms of robustness and accuracy (R2 = 0.917, 0.931, and 0.882 and RMSE = 0.627, 0.078, and 1.99 for LAI, fCover, and biomass from Sentinel-2 20 m bands, respectively). GPR-PSO also possesses the unique ability to generate pixel-based uncertainty maps (confidence levels) for prediction purposes (i.e., an estimated uncertainty level <0.7 in LAI, fCover, and biomass for 96%, 98%, and 71% of the total study area, respectively). Altogether, GPR-PSO appears to be the most suitable option for mapping biophysical variables at the local scale using Sentinel-2 images. Full article
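The core GPR predictor that the PSO/GA/TS/SA wrappers tune can be sketched in a few lines: the posterior mean gives the estimate and the posterior variance gives the per-pixel uncertainty mentioned in the abstract. The RBF kernel, its hyperparameter values, and the function names below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel k(a,b) = var * exp(-(a-b)^2 / (2 l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-d2 / (2.0 * length ** 2))

def gpr_predict(x_train, y_train, x_test, length=1.0, var=1.0, noise=1e-4):
    """GP posterior mean and variance for 1-D inputs; the metaheuristic
    wrappers search over hyperparameters such as `length` and `noise`."""
    k = rbf_kernel(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    k_s = rbf_kernel(x_test, x_train, length, var)
    mean = k_s @ np.linalg.solve(k, y_train)
    # Posterior variance: prior variance minus the data-explained part.
    var_post = var - np.diag(k_s @ np.linalg.solve(k, k_s.T))
    return mean, var_post
```

Near the training data the posterior variance collapses toward zero, while far from it the prediction reverts to the prior mean with full prior variance; this is exactly the behavior that produces the pixel-based uncertainty (confidence) maps described above.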
Figure 1. (a,b) The study area in Iran and Tehran Province, respectively; (c) the location of the study area (Ghale-Nou County) and of the field data collection (i.e., ESUs).
Figure 2. Field sampling according to the silage maize phenology.
Figure 3. Samples of DHP taken in ESUs at different phenological stages of silage maize.
Figure 4. Flowchart of LAI, fCover, and biomass estimation using kernel-based MLRA methods, RF, ANN, and the developed GPR-SA, GPR-TS, GPR-GA, and GPR-PSO algorithms. Abbreviations: elementary sampling unit (ESU), leaf area index (LAI), Gaussian process regression (GPR), simulated annealing (SA), tabu search (TS), genetic algorithm (GA), particle swarm optimization (PSO), random forest (RF), artificial neural network (ANN), and k-fold cross-validation (k-fold C-V).
Figure 5. Flowcharts of the GPR-SA (a), GPR-TS (b), GPR-GA (c), and GPR-PSO (d) algorithms for modeling the biophysical variables.
Figure 6. Assessment of kernel-based MLRAs and the developed algorithms compared to RF and ANN in LAI, fCover, and biomass estimation (in the 10 and 20 m band groups) by the cross-validated R2 (mean and standard deviation) (a,b), the cross-validated RMSE (c), and runtime (d). The RMSE units for LAI, fCover, and biomass are m2/m2, %, and ton/ha, respectively; the cross-validated statistics were calculated from the mean (µ) and standard deviation (σ) of the results in cross-validation mode.
Figure 7. Significance values (length scale σ) of the S2 bands in biophysical variable extraction generated by the GPR model: the lower the length scale (σ), the more significant the band.
Figure 8. LAI pixel-based map (a), uncertainty (SD) (b), and CV (c); fCover pixel-based map (d), uncertainty (SD) (e), and CV (f); and biomass pixel-based map (g), uncertainty (SD) (h), and CV (i), using GPR-PSO in the S2 20 m band group (26 August 2019). Red circles in (b,e,h) mark areas with SD > 0.7. See Figure 1 for the geo-information.
Figure 9. Pixel-based spatio-temporal variations in fCover (%) calculated from the time-series satellite images (taken on 12 July, 27 July, 11 August, 26 August, 5 September, and 20 September, a subset of the sampling dates). In subimage 6, the zoomed field has been reaped, resulting in decreased fCover values. See Figure 1 for the geo-information.
Figure 10. Pixel-based spatio-temporal variations in biomass calculated from the time-series satellite images (taken on 12 July, 27 July, 11 August, 26 August, 5 September, and 20 September, a subset of the sampling dates). In subimage 6, the zoomed field has been reaped, resulting in decreased biomass values. See Figure 1 for the geo-information.
21 pages, 7816 KiB  
Article
Spatio-Temporal Distribution Characteristics of Glacial Lakes in the Altai Mountains with Climate Change from 2000 to 2020
by Nan Wang, Tao Zhong, Jianghua Zheng, Chengfeng Meng and Zexuan Liu
Remote Sens. 2023, 15(14), 3689; https://doi.org/10.3390/rs15143689 - 24 Jul 2023
Cited by 3 | Viewed by 1565
Abstract
The evolution of a glacial lake is a true reflection of glacial and climatic change. To date, the study of glacial lakes in the Altai Mountains has mainly applied high-resolution remote sensing images to monitor and evaluate the potential hazards of glacial lakes; there is still no rapid, large-scale method for monitoring the dynamic variations in glacial lakes in this region, and little research on predicting their future tendency. Based on the supervised classification results obtained with Google Earth Engine (GEE), combined with an analysis of meteorological data, we analyzed the spatial and temporal variations in glacial lakes in the Altai Mountains between 2000 and 2020 and used the MCE-CA-Markov model to predict their future changes. According to the results, as of 2020, there are 3824 glacial lakes in the Altai Mountains, with an area of 682.38 km2. Over the entire period, the growth rates of glacial lake number and area were 47.82% and 17.07%, respectively. Glacial lakes are more concentrated in the north of the region than in the south. Most glacial lakes have areas smaller than 0.1 km2, and minimal change was observed in glacial lakes larger than 0.2 km2. Analyzing the regional elevation in 100 m intervals, the study found that glacial lakes were predominantly distributed at elevations from 2000 m to 3000 m. Interannual rainfall and temperature fluctuations in the Altai Mountains have slowed since 2014, and the trends in the area and number of glacial lakes have stabilized. The growth of glacial lakes in both number and surface area is expected to continue through 2025 and 2030, although the pace of change will slow. Under small increases in precipitation and large increases in temperature, glacial lakes with faster area growth rates will in the future be located primarily in the southern Altai Mountains. Full article
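The Mann–Kendall trend test applied to the temperature and precipitation series can be sketched as follows; this uses the standard no-ties variance formula, which is an assumption where the series contains tied values:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z).

    S sums the signs of all pairwise differences; |Z| > 1.96 indicates a
    significant monotonic trend at the 5% level (no-ties variance formula).
    """
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

Being rank-based, the test needs no distributional assumption on the annual series, which is why it is a common choice for detecting temperature and precipitation trends in gridded climate data.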
Show Figures

Figure 1
<p>Study area and distribution of glaciers and glacial lakes in 2020.</p>
Figure 2
<p>The number of images used in this study.</p>
Figure 3
<p>The total number of glacial lakes, and the area within the study area, from 2000 to 2020.</p>
Figure 4
<p>Glacial lake inventories from various parts of the study area in 2000, 2010, and 2020: (<b>a</b>) a typical area in the northern Altai Mountains; (<b>b</b>) a typical area in the southern Altai Mountains.</p>
Figure 5
<p>Area and quantity of glacial lakes in different size classes: (<b>a</b>) glacial lake areas during the 21 years studied; (<b>b</b>) glacial lake numbers in the 21 years studied.</p>
Figure 6
<p>The altitudinal distribution of glacial lakes, and changes from 2000 to 2020: (<b>a</b>) area distributions and changes in glacial lakes; (<b>b</b>) quantity distributions and changes in glacial lakes.</p>
Figure 7
<p>Changes in temperature and precipitation in the Altai Mountains from 1990 to 2020. The linear regression is represented by the blue line; the moving mean is represented by the red curve.</p>
Figure 8
<p>(<b>a</b>) The spatial trends in precipitation across the study area; (<b>b</b>) the results of the Mann–Kendall test.</p>
Figure 9
<p>(<b>a</b>) Spatial temperature trends across the study region; (<b>b</b>) the results of the Mann–Kendall test.</p>
Figure 10
<p>Sankey map of land-cover transfers in 2000–2010, 2010–2015, and 2015–2020.</p>
Figure 11
<p>Prediction of the spatial distribution of land cover in the Altai Mountains in 2020, 2025, and 2030.</p>
16 pages, 8601 KiB  
Technical Note
Uncertainty Evaluation on Temperature Detection of Middle Atmosphere by Rayleigh Lidar
by Xinqi Li, Kai Zhong, Xianzhong Zhang, Tong Wu, Yijian Zhang, Yu Wang, Shijie Li, Zhaoai Yan, Degang Xu and Jianquan Yao
Remote Sens. 2023, 15(14), 3688; https://doi.org/10.3390/rs15143688 - 24 Jul 2023
Cited by 2 | Viewed by 956
Abstract
Measurement uncertainty is an extremely important parameter for characterizing the quality of measurement results. In order to measure the reliability of atmospheric temperature detection, the uncertainty needs to be evaluated. In this paper, based on the measurement models originating from the Chanin-Hauchecorne (CH) method, the atmospheric temperature uncertainty was evaluated using the Guide to the Expression of Uncertainty in Measurement (GUM) and the Monte Carlo Method (MCM) by considering the ancillary temperature uncertainty and the detection noise as the major uncertainty sources. For the first time, the GUM atmospheric temperature uncertainty framework was comprehensively and quantitatively validated by MCM following the instructions of JCGM 101: 2008 GUM Supplement 1. The results show that the GUM method is reliable when discarding the data in the range of 10–15 km below the reference altitude. Compared with MCM, the GUM method is recommended to evaluate the atmospheric temperature uncertainty of Rayleigh lidar detection in terms of operability, reliability, and calculation efficiency. Full article
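The GUM-versus-MCM comparison at the heart of this abstract can be illustrated on a toy measurement model (not the Chanin-Hauchecorne temperature retrieval itself): GUM propagates input standard uncertainties through first-order sensitivity coefficients, while MCM samples the inputs and takes the spread of the output. A hedged sketch, with a hypothetical near-linear model `f` standing in for the lidar retrieval:

```python
import math
import random
import statistics

def gum_combined_std(f, x, u, h=1e-6):
    """First-order GUM propagation: u_c^2 = sum (df/dx_i)^2 * u_i^2,
    with sensitivity coefficients estimated by central differences."""
    uc2 = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        d = (f(xp) - f(xm)) / (2 * h)
        uc2 += (d * ui) ** 2
    return math.sqrt(uc2)

def mcm_std(f, x, u, trials=200_000, seed=1):
    """MCM propagation: draw Gaussian inputs, take the output std."""
    rng = random.Random(seed)
    ys = [f([rng.gauss(xi, ui) for xi, ui in zip(x, u)])
          for _ in range(trials)]
    return statistics.pstdev(ys)

# For a nearly linear model the two methods should agree closely.
f = lambda v: 2.0 * v[0] + 0.5 * v[1]
x, u = [300.0, 40.0], [0.5, 0.2]
print(gum_combined_std(f, x, u), mcm_std(f, x, u))
```

For strongly nonlinear models (such as the CH integration far below the reference altitude), the two estimates diverge, which is exactly what the paper's validation procedure checks.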
Show Figures

Figure 1
<p>Flowchart of the Adaptive Monte Carlo method.</p>
Figure 2
<p>Uncertainty owing to the ancillary temperature uncertainty with ideal return photon counts (no noise).</p>
Figure 3
<p>Absolute deviation of the coverage interval endpoints in evaluating uncertainty owing to the auxiliary temperature uncertainty: (<b>a</b>) 30–59.9 km; (<b>b</b>) 30–36.9 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.05); (<b>c</b>) 37–50 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.5).</p>
Figure 4
<p>Uncertainty owing to detection noise (with only Poisson noise).</p>
Figure 5
<p>Absolute deviation of the coverage interval endpoints in evaluating uncertainty owing to the detection noise: (<b>a</b>) 30–59.9 km; (<b>b</b>) 30–37.1 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.05); (<b>c</b>) 37.2–50 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.5).</p>
Figure 6
<p>Temperature combined standard uncertainty.</p>
Figure 7
<p>Absolute deviation of the coverage interval endpoints in evaluating uncertainty owing to both the auxiliary temperature uncertainty and detection noise: (<b>a</b>) 30–59.9 km; (<b>b</b>) 30–34.8 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.05); (<b>c</b>) 34.9–50 km (<math display="inline"><semantics><mi>δ</mi></semantics></math> = 0.5).</p>
Figure 8
<p>Absolute deviation of the coverage interval endpoints when the integration time was 60 s: (<b>a</b>) auxiliary temperature uncertainty; (<b>b</b>) detection noise; (<b>c</b>) combined standard uncertainty. The altitude range was 30–50 km, and the insets show that the GUM method can be validated in the range of 30–40 km.</p>
Figure 9
<p>Absolute deviation of the coverage interval endpoints when the integration time was 420 s: (<b>a</b>) auxiliary temperature uncertainty; (<b>b</b>) detection noise; (<b>c</b>) combined standard uncertainty. The altitude range was 30–60 km, and the insets show that the GUM method can be validated in the range of 30–50 km.</p>
Figure 10
<p>Absolute deviation of the coverage interval endpoints when the integration time was 2400 s: (<b>a</b>) auxiliary temperature uncertainty; (<b>b</b>) detection noise; (<b>c</b>) combined standard uncertainty. The altitude range was 30–70 km, and the insets show that the GUM method can be validated in the range of 30–60 km.</p>
Figure 11
<p>Atmospheric temperature uncertainty evaluation by the GUM method: (<b>a</b>) auxiliary temperature uncertainty; (<b>b</b>) detection noise; (<b>c</b>) combined standard uncertainty; (<b>d</b>) all in one.</p>
Figure 12
<p>Comparison of the uncertainty of atmospheric temperature evaluated by GUM and MCM: (<b>a</b>–<b>c</b>) uncertainty results; (<b>d</b>–<b>f</b>) difference between the two methods. The three columns from left to right indicate the uncertainty related to auxiliary temperature uncertainty, detection noise, and combined standard uncertainty.</p>
Figure 13
<p>Absolute deviation of the coverage interval endpoints: (<b>a</b>) auxiliary temperature uncertainty; (<b>b</b>) detection noise; (<b>c</b>) combined standard uncertainty. The altitude range was 40–80 km, and the insets show that the GUM method can be validated in the range of 40–70 km.</p>
15 pages, 7818 KiB  
Article
Measuring Vertical Urban Growth of Patna Urban Agglomeration Using Persistent Scatterer Interferometry SAR (PSInSAR) Remote Sensing
by Aniket Prakash, Diksha and Amit Kumar
Remote Sens. 2023, 15(14), 3687; https://doi.org/10.3390/rs15143687 - 24 Jul 2023
Viewed by 1839
Abstract
In the present study, the vertical and horizontal growth of Patna Urban Agglomeration was evaluated using the Persistent Scatterer Interferometry Synthetic Aperture Radar (PSInSAR) technique during 2015–2018. The vertical urban growth of the city landscape was assessed using microwave time series (30 temporal) datasets of Single Look Complex (SLC) Sentinel-1A interferometric Synthetic Aperture Radar using SARPROZ software (ver. 2020). This study demonstrated that peripheral city regions experienced higher vertical growth (~4 m year−1) than the city core regions, owing to greater urban development opportunities leading to significant land use alterations, the development of high-rise buildings, and infrastructural development. The city core of Patna, in contrast, being already saturated and highly densified, underwent an infill and densification process. The rapidly urbanizing city in the developing region witnessed considerable horizontal urban expansion, estimated through the normalized difference index for built-up areas (NDIB) and speckle divergence (SD) using optical Sentinel-2A and microwave Sentinel-1A ground range detected (GRD) satellite data, respectively. The speckle divergence-based method exhibited high urban growth (a net growth of 11.28 km2) with moderate urban infill during 2015–2018 and reported a higher accuracy than NDIB. This study highlights the application of SAR remote sensing for precise urban area delineation and temporal monitoring of urban growth, considering horizontal and vertical expansion, through processing a long series of InSAR datasets that provide valuable information for informed decision-making and support the development of sustainable and resilient cities. Full article
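The "normalized difference index for built-up areas (NDIB)" is not defined on this page; one common formulation of such an index uses SWIR and NIR reflectance, as in the well-known NDBI. The band choice below is an assumption for illustration, not necessarily the authors':

```python
def ndib(swir, nir, eps=1e-12):
    """Normalized difference built-up index, commonly computed as
    (SWIR - NIR) / (SWIR + NIR); positive values suggest built-up
    surfaces, negative values suggest vegetation."""
    return (swir - nir) / (swir + nir + eps)

# Sentinel-2 surface reflectance: SWIR ~ band 11, NIR ~ band 8
# (illustrative reflectance values, not from the study area)
built = ndib(0.30, 0.18)   # built-up pixel: SWIR > NIR, index > 0
veg = ndib(0.20, 0.40)     # vegetated pixel: NIR > SWIR, index < 0
```

In practice the index would be applied per-pixel over the whole scene and thresholded to delineate built-up area, which is then compared against the speckle-divergence product derived from the SAR amplitude data.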
(This article belongs to the Special Issue SAR Processing in Urban Planning)
Show Figures

Figure 1
<p>Study area map of Patna City, located in the eastern parts of India along the south bank of River Ganga.</p>
Figure 2
<p>(<b>a</b>) False color composite (FCC) of Sentinel-2A satellite data, (<b>b</b>) normalized difference index for built-up area, and (<b>c</b>) speckle divergence-based built-up area dated 15 January 2015; (<b>d</b>) FCC of Sentinel-2A, (<b>e</b>) normalized difference index for built-up area, and (<b>f</b>) speckle divergence-based built-up area dated 25 December 2018.</p>
Figure 3
<p>PSInSAR-based vertical growth mapping of Patna Urban Agglomeration (2015–2018).</p>
Figure 4
<p>Overlay of the vertical growth map estimated by PSInSAR (2015–2018) on Google Earth.</p>
Figure 5
<p>(<b>a</b>) Overview of vertical growth estimated by PSInSAR in Patna City, (<b>b1</b>) the Google Earth image showing built-up area spread in Kankarbagh locality in the year 2015, (<b>b2</b>) the corresponding image for the year 2018, (<b>c1</b>) the Google Earth image showing built-up area spread in Kumhrar locality in the year 2015, and (<b>c2</b>) the corresponding image for the year 2018 highlighting urban infill (densification) and vertical growth.</p>
Figure 6
<p>A field photograph representing the built-up area spread in parts of Patna Urban Agglomeration.</p>
16 pages, 1092 KiB  
Article
Depth Information Precise Completion-GAN: A Precisely Guided Method for Completing Ill Regions in Depth Maps
by Ren Qian, Wenfeng Qiu, Wenbang Yang, Jianhua Li, Yun Wu, Renyang Feng, Xinan Wang and Yong Zhao
Remote Sens. 2023, 15(14), 3686; https://doi.org/10.3390/rs15143686 - 24 Jul 2023
Viewed by 1132
Abstract
In the depth map obtained through binocular stereo matching, there are many ill regions caused by factors such as lighting or occlusion. Accurate depth cannot be recovered in these regions because the information required for matching is missing. Since a GAN-based completion model generates random results, it cannot complete the depth map accurately, yet the depth map must be completed in accordance with reality. To address this issue, this paper proposes a depth information precise completion GAN (DIPC-GAN) that effectively uses the Guid layer normalization (GuidLN) module to guide the model toward precise completion by utilizing depth edges. GuidLN flexibly adjusts the weights of the guiding conditions based on intermediate results, allowing modules to incorporate the guiding information accurately and effectively. The model employs multiscale discriminators to discriminate results of different resolutions at different generator stages, enhancing the generator’s grasp of both overall image and detail information and improving its robustness. Additionally, this paper proposes Attention-ResBlock, a task-specific residual module that enables all ResBlocks in each task module of the GAN-based multitask model to focus on their own task by sharing a mask; even when the ill regions are large, the model can effectively complete the missing details in these regions. The model has shown good repair results on datasets including artificial, real, and remote sensing images. The final experimental results showed that the model’s REL and RMSE decreased by 9.3% and 9.7%, respectively, compared to RDFGAN. Full article
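The abstract does not give GuidLN's exact form; the following is a speculative numpy sketch of guidance-modulated normalization in the spirit of conditional normalization layers (e.g., SPADE), where per-pixel scale and shift are derived from a depth-edge guide. The function name, the linear modulation, and the scalar weights are all illustrative assumptions, not the authors' design:

```python
import numpy as np

def guided_layer_norm(x, guide, w_gamma, w_beta, eps=1e-5):
    """Sketch of guidance-modulated layer normalization: normalize
    the feature map, then scale and shift it with parameters derived
    from the guide (e.g., a depth-edge map). w_gamma / w_beta are
    hypothetical learned weights, reduced here to scalars."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    gamma = 1.0 + w_gamma * guide   # guide modulates the scale...
    beta = w_beta * guide           # ...and the shift, per pixel
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(size=(4, 8, 8))  # C,H,W features
edges = np.zeros((1, 8, 8))
edges[:, :, 4] = 1.0                                 # one depth edge
y = guided_layer_norm(x, edges, w_gamma=0.5, w_beta=0.1)
```

Away from the edge the guide is zero, so the layer reduces to plain normalization; along the edge the features are re-scaled and shifted, which is one way intermediate results can be steered by edge guidance.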
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)
Show Figures

Figure 1
<p>In the figure, (<b>a</b>) is the satellite reference image, and (<b>b</b>) is the depth map of the reference image, which contains a large number of ill regions, including reflective rivers, lakes, and areas blocked by buildings and trees, all marked in black. (<b>c</b>) is the depth map repaired by the RDFGAN model, and (<b>d</b>) is the depth map repaired by our model. It can be seen that both the reflective rivers and lakes and the occlusion areas caused by buildings and trees have been well repaired.</p>
Figure 2
<p>The overall structure of the model.</p>
Figure 3
<p>The structure of Attention-ResBlock.</p>
Figure 4
<p>The structure of Guid normalization.</p>
Figure 5
<p>The visualization results of each algorithm on the SceneFlow dataset, where (<b>a</b>) is the ground truth, (<b>b</b>) is the disparity map after randomly generating ill regions, (<b>c</b>) is the label of the ill regions, (<b>d</b>) is the depth edge map, (<b>e</b>) is the result of the NLSPN model, (<b>f</b>) is the result of the ACMNet model, (<b>g</b>) is the result of the RDFGAN model, and (<b>h</b>) is the result of the model proposed in this paper.</p>
Figure 6
<p>In the figure, row (<b>a</b>) is the ground truth, row (<b>b</b>) is a randomly generated ill region, row (<b>c</b>) is the result of the NLSPN model, row (<b>d</b>) is the result of the RDFGAN model, and row (<b>e</b>) is the result of our model.</p>
Figure 7
<p>Column (<b>a</b>) shows a remote sensing image, column (<b>b</b>) displays the disparity map of randomly generated defective regions, column (<b>c</b>) exhibits the results of the NLSPN model, column (<b>d</b>) displays the results of the PRR model, column (<b>e</b>) shows the results of the UARes model, column (<b>f</b>) displays the results of the RDFGAN model, and column (<b>g</b>) shows the results of our algorithm. As the disparity map of remote sensing images is not visually prominent, we magnified the defective regions in this experiment and adjusted the contrast of the images to display more clearly the details missed by each model in defective regions.</p>
Figure 8
<p>Column (<b>a</b>) shows the ground truth of the disparity map, column (<b>b</b>) masks a large repairable area, column (<b>c</b>) exhibits the results of the Ours-DA model, column (<b>d</b>) displays the results of the Ours-GA model, column (<b>e</b>) shows the results of the Ours-GD model, and column (<b>f</b>) displays the results of the complete model.</p>
17 pages, 26389 KiB  
Article
Surface Displacement of Hurd Rock Glacier from 1956 to 2019 from Historical Aerial Frames and Satellite Imagery (Livingston Island, Antarctic Peninsula)
by Gonçalo Prates and Gonçalo Vieira
Remote Sens. 2023, 15(14), 3685; https://doi.org/10.3390/rs15143685 - 24 Jul 2023
Cited by 1 | Viewed by 1259
Abstract
In the second half of the 20th century, the western Antarctic Peninsula recorded the highest mean annual air temperature rise in the Antarctic. The South Shetland Islands are located about 100 km northwest of the Antarctic Peninsula. The mean annual air temperature at sea level in this Maritime Antarctic region is close to −2 °C and, therefore, very sensitive to permafrost degradation following atmospheric warming. Among the geomorphological indicators of permafrost are rock glaciers, found below steep slopes as a consequence of permafrost creep, but with surficial movement also generated by solifluction and shallow landslides of rock debris and finer sediments. Rock glacier surface velocity has been adopted by the Global Climate Observing System as a new essential climate variable parameter, and its historical analysis allows insight into past permafrost behavior. The recovery of 1950s aerial image stereo-pairs and structure-from-motion processing, together with the analysis of QuickBird 2007 and Pleiades 2019 high-resolution satellite imagery, allowed displacements of the Hurd rock glacier to be inferred over 60 years using compression ridge-and-furrow morphology analysis. Displacements measured on orthographic images of the rock glacier surface from 1956 to 2019 ranged from 7.5 m to 22.5 m, corresponding to surface velocities of 12 cm/year to 36 cm/year, with combined root-mean-square deviations of 2.5 m in easting and 2.4 m in northing. The inferred surface velocity also provides a baseline reference for assessing today’s displacements. The results show patterns of Hurd rock glacier displacement velocity analogous to those reported within the last decade, without it being possible to detect any displacement acceleration. Full article
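The displacement-to-velocity computation behind the reported 12–36 cm/year figures reduces to simple planimetric arithmetic between positions of the same ridge-and-furrow feature on co-registered orthoimages. The coordinates below are invented for illustration, not taken from the study:

```python
import math

def surface_velocity(p_start, p_end, years):
    """Horizontal displacement (m) and mean surface velocity (cm/year)
    between two positions of the same surface feature, given as
    (easting, northing) pairs in metres on co-registered orthoimages."""
    de = p_end[0] - p_start[0]
    dn = p_end[1] - p_start[1]
    disp = math.hypot(de, dn)
    return disp, disp / years * 100.0

# Hypothetical feature positions in a UTM-style metric grid,
# 1956 vs 2019 (63-year baseline):
disp, v = surface_velocity((633410.0, 3026550.0),
                           (633419.0, 3026562.0), years=63.0)
```

With a combined coordinate rms of roughly 2.5 m, displacements near the lower end of the 7.5–22.5 m range carry a proportionally larger relative uncertainty, which is why the long 63-year baseline matters.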
(This article belongs to the Special Issue Remote Sensing of Cryosphere and Related Processes)
Show Figures

Graphical abstract
Figure 1
<p>Location of Hurd Peninsula, within the dashed box, in Livingston Island of the South Shetland Islands, about 100 km north of the Antarctic Peninsula (<b>a</b>). Air temperature at Bellingshausen station, King George Island, retrieved from Reference Antarctic Data for Environmental Research (legacy.bas.ac.uk/met/reader/) (accessed on 19 April 2023); annual (gray), winter (blue), and summer (orange) means, with solid lines for the weighted moving average and dotted lines for the trend (<b>b</b>). Location of Hurd rock glacier, inside the dashed box, in Hurd Peninsula, with the digital elevation model from historical aerial frames and REMA outside the SfM computed area (<b>c</b>).</p>
Figure 2
<p>Oblique photograph of Hurd rock glacier depicting the eastern valley slope and the ridge-and-furrow topography, as well as the frontal zone over the Holocene raised beach.</p>
Figure 3
<p>Orthographic image of 2007 by the QuickBird sensor (©DigitalGlobe Inc., Boulder, Colorado, USA, 2007) with the location of the ground control points used shown in red circles (see <a href="#app1-remotesensing-15-03685" class="html-app">Appendix A</a>) and the profile (A-B-C) of measured heights comparison (<b>a</b>). Orthographic image of 2019 by the Pleiades sensor (©CNES, Paris, France, 2019, Airbus distribution service) with the location of the twenty-six assessed features and the common root-mean-square deviation of their coordinates on the orthographic images (1956, 1957, 2007, and 2019), varying from 0.9 m to 6.2 m (see <a href="#app2-remotesensing-15-03685" class="html-app">Appendix B</a>) (<b>b</b>).</p>
Figure 4
<p>Orthographic image of 1956 and location of the frame center in a red box (<b>a</b>). Orthographic image of 1957 and location of the frame center in a red box (<b>b</b>).</p>
Figure 5
<p>Initial point of displacement and displacement vectors (in red) at the measured features in 1956 (<b>a</b>) and 1957 (<b>b</b>). Displacement vectors at image scale range from 7.5 m to 22.5 m. Coordinates in WGS84–UTM20S.</p>
Figure 6
<p>Terminal point of displacement and displacement vectors (in red) at the measured features in 2007 (©DigitalGlobe Inc., Boulder, Colorado, USA, 2007) (<b>a</b>) and 2019 (©CNES, Paris, France, 2019, Airbus distribution service) (<b>b</b>). Displacement vectors at image scale range from 7.5 m to 22.5 m. Coordinates in WGS84–UTM20S.</p>
Figure 7
<p>Surface velocities of displaced features range from 12 cm/year to 36 cm/year. Initial point of velocity vectors (in red) at the measured features in 1956. Includes material ©CNES, Paris, France, 2019 (Airbus distribution service). Coordinates in WGS84–UTM20S.</p>
Figure 8
<p>Height profile of the Hurd rock glacier in 1956, based on SfM from the 1956 and 1957 FIDASE photographic data, relative to the 2021 REMA from satellite imagery from 2009 to 2021. A-B-C location in the profile as in <a href="#remotesensing-15-03685-f003" class="html-fig">Figure 3</a>a.</p>
Figure A1
<p>Georeferencing ground control points (red dots) on the orthographic images of 1956 (<b>a</b>), 1957 (<b>b</b>), 2007 (©DigitalGlobe Inc., Boulder, Colorado, USA, 2007) (<b>c</b>), and 2019 (©CNES, Paris, France, 2019, Airbus distribution service) (<b>d</b>) for assessment of visual identification and relative precision.</p>
4 pages, 176 KiB  
Editorial
Cartography of the Solar System: Remote Sensing beyond Earth
by Stephan van Gasselt and Andrea Naß
Remote Sens. 2023, 15(14), 3684; https://doi.org/10.3390/rs15143684 - 24 Jul 2023
Viewed by 1053
Abstract
Cartography is traditionally associated with map making and the visualization of spatial information [...] Full article
(This article belongs to the Special Issue Cartography of the Solar System: Remote Sensing beyond Earth)
21 pages, 11020 KiB  
Article
Effects of Production–Living–Ecological Space Patterns Changes on Land Surface Temperature
by Han Liu, Ling Qin, Menggang Xing, Haiming Yan, Guofei Shang and Yuanyuan Yuan
Remote Sens. 2023, 15(14), 3683; https://doi.org/10.3390/rs15143683 - 24 Jul 2023
Cited by 1 | Viewed by 1402
Abstract
Rapid economic and social development has triggered competition for limited land space from different industries, accelerating the evolution of Beijing’s urban landscape types. The increase in impermeable surfaces and the decrease in ecological land have intensified impacts on the urban thermal environment. Since previous studies have mainly focused on the impact of a single urban landscape on the urban thermal environment and lacked an exploration of the combined impact of multiple landscapes, this study applied standard deviation ellipses, Pearson correlation analysis, land surface temperature (LST) profile analysis, and hot spot analysis to comprehensively explore the influence of the evolving production–living–ecological space (PLES) pattern on LST. The results show that the average LST of the various spaces continued to increase before 2009 and decreased slowly after 2009; in each year, living space had the highest average temperature, followed by production space, with ecological space the lowest. The spatiotemporal shift path of the thermal environment is consistent with the shift trajectory of the living space center of gravity in Beijing; LST is positively correlated with living space (LS) and negatively correlated with production space (PS) and ecological space (ES). Influenced by changes in the underlying surface type, the longitudinal (north–south) thermal profile curve of LST shows a general trend of “low at both ends and high in the middle”, while along the west–east profile LST fluctuates significantly with land space type, showing a general trend of “first decreasing, then increasing, and finally decreasing”.
In addition, the hot spot analysis shows that the coverage areas of very hot spots, hot spots, and warm spots increased by 0.72%, 1.13%, and 2.03%, respectively, over the past 30 years, with the main expansion toward the southeast; very cold spots and cold spots are distributed in the northwestern ecological space, and their area first decreased and then increased. Full article
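The hot/cold spot classes come from the Getis–Ord Gi* statistic (see Figure 13). A minimal numpy sketch of the standard Gi* formula, with binary spatial weights that include the focal cell and no multiple-testing correction:

```python
import numpy as np

def getis_ord_gi_star(values, weights):
    """Getis-Ord Gi* z-score for each location. `weights` is an
    n x n spatial weights matrix that includes the focal location
    itself (w_ii = 1 for the starred variant)."""
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    wx = w @ x                       # local weighted sums
    wsum = w.sum(axis=1)
    w2sum = (w ** 2).sum(axis=1)
    denom = s * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return (wx - xbar * wsum) / denom

# 1-D toy example: immediate neighbours plus self as the weights
vals = [1, 1, 1, 9, 9, 9, 1, 1, 1]
n = len(vals)
w = np.eye(n)
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1
z = getis_ord_gi_star(vals, w)
```

Large positive z marks a hot spot (a cluster of high LST), large negative z a cold spot; the "very hot / hot / warm" classes correspond to z-score (confidence) thresholds.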
Show Figures

Figure 1
<p>Location of the study area.</p>
Figure 2
<p>Land cover classification for 1990, 1999, 2004, 2009, 2014, and 2019 in Beijing.</p>
Figure 3
<p>Time series diagram of production–living–ecological space area statistics.</p>
Figure 4
<p>Land surface temperature for 1990, 1999, 2004, 2009, 2014, and 2019 in Beijing.</p>
Figure 5
<p>Framework of PLES change and its effects on LST.</p>
Figure 6
<p>LULC transfer flow charts from 1990 to 2019 in Beijing.</p>
Figure 7
<p>Distribution of heat levels for 1990, 1999, 2004, 2009, 2014, and 2019 in Beijing.</p>
Figure 8
<p>Box plots of land surface temperature for production–living–ecological space in Beijing.</p>
Figure 9
<p>Spatial dynamic changes in heat islands from 1996 to 2019 in Beijing.</p>
Figure 10
<p>Regression analysis results of PLAND and LST from 1990 to 2019 in Beijing. ** indicates that the correlation is significant at the 0.01 level (<span class="html-italic">p</span> &lt; 0.01).</p>
Figure 11
<p>The profile of LST from north to south in Beijing.</p>
Figure 12
<p>The profile of LST from west to east in Beijing.</p>
Figure 13
<p>Hot spots based on Getis–Ord G<sub>i</sub>* in Beijing.</p>
13 pages, 2268 KiB  
Communication
A High-Performance Thin-Film Sensor in 6G for Remote Sensing of the Sea Surface
by Qi Song, Xiaoguang Xu, Jianchen Zi, Jiatong Wang, Zhongze Peng, Bingyuan Zhang and Min Zhang
Remote Sens. 2023, 15(14), 3682; https://doi.org/10.3390/rs15143682 - 24 Jul 2023
Cited by 1 | Viewed by 1657
Abstract
Functional devices in the THz band will provide a highly important technical guarantee for the promotion and application of 6G technology. We sought to design a high-performance sensor with a large area, high responsivity, and low noise equivalent power that is stable at room temperature for long periods and still usable under high humidity; it is suitable for the environment of marine remote sensing technology and has the potential for mass production. We prepared a Te film with high stability and studied its crystallization by comparing the sensing and detection of THz waves at different annealing temperatures. The best crystallization and detection performance is achieved by annealing at 100 °C for 60 min, yielding a responsivity of up to 19.8 A/W and a noise equivalent power (NEP) of 2.8 pW Hz−1/2. The effective detection area of the detector can reach the centimeter level, and this performance is maintained for more than 2 months in a humid environment at 30 °C with 70–80% humidity and without encapsulation. Considering its advantages of stability, detection performance, large effective area, and ease of mass preparation, our Te thin film is an ideal sensor for 6G ocean remote sensing technology. Full article
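The quoted figures of merit are linked by standard photodetector relations: NEP is the noise spectral density divided by the responsivity, and the specific detectivity D* normalizes NEP by the detector area. The current-noise value below is back-calculated for illustration only, not a value reported by the paper:

```python
import math

def nep(noise_density, responsivity):
    """Noise equivalent power (W Hz^-1/2): current-noise spectral
    density (A Hz^-1/2) divided by photoresponsivity (A/W)."""
    return noise_density / responsivity

def specific_detectivity(area_cm2, nep_w):
    """Specific detectivity D* (cm Hz^1/2 W^-1) = sqrt(area) / NEP."""
    return math.sqrt(area_cm2) / nep_w

# With R = 19.8 A/W and NEP = 2.8 pW Hz^-1/2 as in the abstract, the
# implied current-noise density is roughly 55 pA Hz^-1/2:
nep_val = nep(55.4e-12, 19.8)                 # ~2.8e-12 W Hz^-1/2
d_star = specific_detectivity(1.0, nep_val)   # assuming ~1 cm^2 area
```

The centimeter-scale effective area matters because D* grows with the square root of the area at fixed NEP, which is part of what makes a large-area film attractive for remote sensing receivers.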
(This article belongs to the Special Issue Advanced Techniques for Water-Related Remote Sensing)
Show Figures

Figure 1
<p>Experimental setup diagram. (<b>a</b>) Microcurrent testing probe platform; (<b>b</b>) THz time-domain spectrometer.</p>
Figure 2
<p>Characterization of Te thin films. (<b>a</b>) Raman spectra of Te films at different annealing temperatures. (<b>b</b>) SEM images of film thickness. (<b>c</b>) Raman spectral intensity versus incident angle for samples with different annealing temperatures and polarizations. (<b>d</b>) Raman spectra at different laser polarization incidences for the Te film annealed at 100 °C.</p>
Figure 3
<p>Characterization of Te thin films in the THz band. (<b>a</b>) THz pulses through thin films at different annealing temperatures. (<b>b</b>) Fourier transform results of (<b>a</b>). (<b>c</b>) THz transmittance of thin films with different annealing temperatures. (<b>d</b>) Complex conductivity of thin films with different annealing temperatures in the THz band.</p>
Figure 4
<p>Detection performance at 0.1 THz for Te films annealed at 100 °C, annealed at 200 °C, and not annealed. (<b>a</b>) Detector total noise (<span class="html-italic">Vn</span>); (<b>b</b>) photoresponsivity (<span class="html-italic">R<sub>A</sub></span>); (<b>c</b>) noise equivalent power (<span class="html-italic">NEP</span>); (<b>d</b>) detectivity (<span class="html-italic">D*</span>). Typical calculated (lines) and measured (dots) values.</p>
Figure 5
<p>Response times of different Te films.</p>
11 pages, 1935 KiB  
Technical Note
Research on Stellar Occultation Detection with Bandpass Filtering for Oxygen Density Retrieval
by Zheng Li, Xiaocheng Wu, Cui Tu, Xiong Hu, Zhaoai Yan, Junfeng Yang and Yanan Zhang
Remote Sens. 2023, 15(14), 3681; https://doi.org/10.3390/rs15143681 - 24 Jul 2023
Cited by 2 | Viewed by 962
Abstract
Stellar occultation instruments detect the transmission of stellar spectra through a planetary atmosphere to retrieve the densities of various atmospheric components. This paper introduces the idea of using instruments with bandpass filters for stellar occultation detection. According to the characteristics of the occultation technique for oxygen density measurement, a full-link forward model is established, and the average transmission under a typical nocturnal atmosphere is calculated with the help of the HITRAN database, an occultation simulation, and a 3D ray-tracing program. The central wavelength and bandwidth suitable for 760 nm oxygen A-band absorption measurement are discussed. This paper also compares the results of the forward model with GOMOS spectrometer data in this band and calculates the observation signal-to-noise ratio corresponding to different instrument parameters and target star magnitudes. The results provide a theoretical basis for the development of a stellar occultation technique with a bandpass filter and guidance for instrument design. Full article
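Occultation retrievals of this kind rest on the Beer–Lambert law: the measured transmission along the line of sight constrains the O2 column density. A simplified monochromatic sketch (the real A-band case integrates many HITRAN lines over the filter passband; the numerical values below are illustrative, not from the paper):

```python
import math

def transmission(sigma_cm2, column_cm2):
    """Beer-Lambert law along the occultation line of sight:
    T = exp(-sigma * N), with sigma the absorption cross-section
    (cm^2) and N the integrated slant column density (cm^-2)."""
    return math.exp(-sigma_cm2 * column_cm2)

def retrieve_column(sigma_cm2, measured_t):
    """Inverting the same relation retrieves the O2 column density
    from a measured transmission."""
    return -math.log(measured_t) / sigma_cm2

sigma = 1.0e-24   # illustrative A-band absorption cross-section, cm^2
column = 5.0e23   # illustrative slant column density, cm^-2
t = transmission(sigma, column)        # exp(-0.5), about 0.61
recovered = retrieve_column(sigma, t)  # round-trips to the input
```

A bandpass instrument measures a band-averaged transmission rather than this monochromatic one, which is why the paper's forward model must weight the line-by-line cross-sections by the filter response before comparing with GOMOS.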
(This article belongs to the Section Atmospheric Remote Sensing)
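The transmission computed by such a forward model follows the Beer–Lambert law: T = exp(-σN) for an absorption cross-section σ and a column density N along the occultation ray, averaged over the filter bandpass. Below is a minimal Python sketch of that relation; the helper names and the numerical values are order-of-magnitude assumptions for the O<sub>2</sub> A-band, not results from the paper.

```python
import math

def transmission(cross_section_cm2, column_density_cm2):
    """Beer-Lambert transmission T = exp(-sigma * N) for a single absorber."""
    return math.exp(-cross_section_cm2 * column_density_cm2)

def band_average_transmission(sigmas, column_density_cm2):
    """Average transmission over a bandpass sampled at several wavelengths."""
    return sum(math.exp(-s * column_density_cm2) for s in sigmas) / len(sigmas)

# Illustrative (not measured) values: an O2 A-band cross-section near
# 1e-24 cm^2 and an O2 column of ~1e23 cm^-2 along a low tangent-height ray.
t = transmission(1e-24, 1e23)
```

With these toy numbers the optical depth is σN = 0.1, so the line of sight transmits roughly 90% of the starlight; the band average simply repeats this per sampled wavelength inside the filter.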
Show Figures

Figure 1
<p>Scheme of a stellar occultation event.</p>
Full article ">Figure 2
<p>O<sub>2</sub> spectral-line intensity data by HITRAN 2004 (<span class="html-italic">T</span> = 296 K).</p>
Full article ">Figure 3
<p>O<sub>2</sub> absorption cross-section (<span class="html-italic">T</span> = 296 K).</p>
Full article ">Figure 4
<p>Transmission as a function of wavelength.</p>
Full article ">Figure 5
<p>Average transmission results of GOMOS and forward model in 759.8–761.8 nm.</p>
Full article ">Figure 6
<p>The relationship between bandwidth and max SNR.</p>
Full article ">Figure 7
<p>The relationship between central wavelength and SNR when bandwidth is set to 2 nm.</p>
Full article ">Figure 8
<p>The relationship between SNR and lens radius, apparent magnitude, and tangent height.</p>
Full article ">
41 pages, 10735 KiB  
Article
A Thorough Evaluation of 127 Potential Evapotranspiration Models in Two Mediterranean Urban Green Sites
by Nikolaos Proutsos, Dimitris Tigkas, Irida Tsevreni, Stavros G. Alexandris, Alexandra D. Solomou, Athanassios Bourletsikas, Stefanos Stefanidis and Samuel Chukwujindu Nwokolo
Remote Sens. 2023, 15(14), 3680; https://doi.org/10.3390/rs15143680 - 23 Jul 2023
Cited by 5 | Viewed by 2348
Abstract
Potential evapotranspiration (PET) is a particularly important parameter for understanding water interactions and balance in ecosystems, while it is also crucial for assessing vegetation water requirements. The accurate estimation of PET is typically data demanding, while specific climatic, geographical and local factors may [...] Read more.
Potential evapotranspiration (PET) is a particularly important parameter for understanding water interactions and balance in ecosystems, while it is also crucial for assessing vegetation water requirements. The accurate estimation of PET is typically data demanding, while specific climatic, geographical and local factors may further complicate this task. Especially in city environments, where built-up structures may strongly influence the micrometeorological conditions and urban green sites may occupy limited spaces, the selection of proper PET estimation approaches is critical, also considering data availability issues. In this study, a wide variety of empirical PET methods were evaluated against the FAO56 Penman–Monteith benchmark method in the environment of two Mediterranean urban green sites in Greece, aiming to investigate their accuracy and suitability under specific local conditions. The methods under evaluation cover the full range of empirical PET estimation approaches: namely, mass transfer-based, temperature-based, radiation-based, and combination methods, 112 in total. Furthermore, 15 locally calibrated and adjusted models have been developed based on the general forms of the mass transfer, temperature, and radiation equations, improving the performance of the original models for local application. Among the 127 (112 original and 15 adjusted) evaluated methods, the radiation-based methods and adjusted models performed overall better than the temperature-based and the mass transfer methods, whereas the data-demanding combination methods received the highest ranking scores. The adjusted models seem to give accurate PET estimates for local use, and they might be applied in sites with similar conditions after proper validation. Full article
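The temperature-based family evaluated here can be illustrated with the Hargreaves–Samani (1985) equation, one widely used member of that group. The sketch below uses the standard published coefficients; the example inputs and the conversion of extraterrestrial radiation to equivalent evaporation via the latent heat of vaporization are generic assumptions, not values from the paper.

```python
import math

LAMBDA = 2.45  # latent heat of vaporization, MJ kg^-1 (~1 mm water per 2.45 MJ m^-2)

def hargreaves_pet(tmean_c, tmax_c, tmin_c, ra_mj_m2_day):
    """Hargreaves-Samani (1985) temperature-based PET in mm/day.

    ra_mj_m2_day: extraterrestrial radiation, converted internally to
    equivalent evaporation (mm/day) by dividing by LAMBDA.
    """
    ra_mm = ra_mj_m2_day / LAMBDA
    return 0.0023 * ra_mm * (tmean_c + 17.8) * math.sqrt(max(tmax_c - tmin_c, 0.0))

# Illustrative warm summer day with Ra ~ 40 MJ m^-2 day^-1
pet = hargreaves_pet(tmean_c=26.0, tmax_c=33.0, tmin_c=19.0, ra_mj_m2_day=40.0)
```

Only temperature extremes and (tabulated) extraterrestrial radiation are needed, which is exactly why such methods matter when full meteorological records are unavailable at urban green sites.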
Show Figures

Figure 1
<p>(<b>a</b>) Map of the sites and photos of the meteorological stations installed in the urban green spaces (UGSs) of (<b>b</b>) Heraklion (S. Greece—Crete island) and (<b>c</b>) Amaroussion (central Greece).</p>
Full article ">Figure 2
<p>Monthly average, minimum and maximum values of (<b>a</b>) air temperature in Heraklion, (<b>b</b>) air temperature in Amaroussion, (<b>c</b>) relative humidity in Heraklion, (<b>d</b>) relative humidity in Amaroussion, (<b>e</b>) wind speed and gust in Heraklion, (<b>f</b>) wind speed and gust in Amaroussion, (<b>g</b>) precipitation in Heraklion and (<b>h</b>) precipitation in Amaroussion.</p>
Full article ">Figure 3
<p>(<b>a</b>) Daily and (<b>b</b>) monthly PET, estimated by the FAO56-PM method at two urban green spaces in the cities of Heraklion and Amaroussion. Vertical lines show the standard deviations.</p>
Full article ">Figure 4
<p>Correlation between daily PET values estimated by the best five mass transfer methods (x-axis) against the benchmark method of FAO56-PM (y-axis) for the two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure 5
<p>Correlation between daily PET values estimated by the best-performing temperature-based methods (x-axis) of the general forms PET = f (T, RH or PR) against the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure 6
<p>Correlation between daily ET values estimated by the five best-performing radiation-based methods (x-axis), against the FAO56-PM benchmark method (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure 7
<p>Correlation between daily PET values estimated by the five better-performing combination methods (x-axis) against the FAO56-PM benchmark method (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure 8
<p>Correlation between daily PET estimated by the adjusted models (x-axis) and the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line depicts the 1:1 regression.</p>
Full article ">Figure A1
<p>Correlation between daily PET values estimated by different mass transfer methods (x-axis) and the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure A2
<p>Correlation between daily PET values estimated by different temperature-based methods (x-axis) of the general forms PET = f (T) and the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure A3
<p>Correlation between daily PET values estimated by different temperature-based methods (x-axis) of the general forms PET = f (T, RH or PR) and the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure A4
<p>Correlation between daily ET values estimated by different radiation-based methods (x-axis) of the general forms PET = f (Rs) and PET = f (Rs, T) with the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure A5
<p>Correlation between daily ET values estimated by different radiation-based methods (x-axis) of the form PET = f (Rs, T, RH) with the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">Figure A6
<p>Correlation between daily ET values estimated by different combination methods (x-axis) and the benchmark method of FAO56-PM (y-axis) for two urban green areas in Amaroussion (gray points) and Heraklion (red points) along with the linear regression statistics. The blue line indicates the 1:1 regression.</p>
Full article ">
20 pages, 25724 KiB  
Article
Adaptive Speckle Filter for Multi-Temporal PolSAR Image with Multi-Dimensional Information Fusion
by Haoliang Li, Xingchao Cui, Mingdian Li, Junwu Deng and Siwei Chen
Remote Sens. 2023, 15(14), 3679; https://doi.org/10.3390/rs15143679 - 23 Jul 2023
Cited by 1 | Viewed by 1459
Abstract
Polarimetric synthetic aperture radar (PolSAR) is an important sensor for earth observation. Multi-temporal PolSAR images obtained by successive observations of the region of interest contain rich polarimetric–temporal–spatial information of the land covers, which has wide applications. Speckle filtering becomes a necessary pre-processing step for [...] Read more.
Polarimetric synthetic aperture radar (PolSAR) is an important sensor for earth observation. Multi-temporal PolSAR images obtained by successive observations of the region of interest contain rich polarimetric–temporal–spatial information of the land covers, which has wide applications. Speckle filtering becomes a necessary pre-processing step for many subsequent applications. Currently, it is common to filter multi-temporal PolSAR data by directly applying a speckle filter developed for single SAR or PolSAR data. The cross-correlation between different time series contains rich information in multi-temporal PolSAR images. Utilizing the complete polarimetric–temporal–spatial information to achieve satisfactory speckle reduction and detail preservation simultaneously remains a major challenge. This work is dedicated to this issue and develops a novel speckle filtering approach for multi-temporal PolSAR data based on multi-dimensional information fusion. The core idea is to establish an adaptive and efficient strategy of similar pixel selection based on the similarity test of multi-temporal polarimetric covariance matrices. This similar pixel selection scheme fuses the complete information of multi-temporal PolSAR data. The sensitivity of the proposed scheme is demonstrated with several typical and challenging texture patterns. Then, an adaptive speckle filter is established specifically for multi-temporal PolSAR data. Intensive comparison studies are carried out with airborne UAVSAR datasets and spaceborne ALOS/PALSAR datasets. Quantitative investigations in terms of the equivalent number of looks (ENL) and the figure of merit (FOM) indexes demonstrate and validate the superiority of the proposed method. Full article
(This article belongs to the Special Issue Advance in SAR Image Despeckling)
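The equivalent number of looks (ENL) used in the quantitative evaluation is conventionally computed over a homogeneous region as the squared mean of the intensity divided by its variance; stronger speckle suppression yields a higher ENL. A minimal sketch follows, with sample values that are illustrative only, not taken from the UAVSAR or ALOS/PALSAR datasets.

```python
def enl(intensities):
    """Equivalent number of looks over a homogeneous region: mean^2 / variance."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    return mean * mean / var

raw = [0.8, 1.3, 0.6, 1.5, 0.9, 1.1, 0.7, 1.4]            # speckled intensities
filtered = [0.95, 1.05, 0.98, 1.02, 1.0, 1.01, 0.99, 1.0]  # after smoothing
```

After filtering, the variance over the homogeneous patch shrinks while the mean is preserved, so `enl(filtered)` far exceeds `enl(raw)`; the FOM index complements this by scoring how well edges survive the filtering.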
Show Figures

Figure 1
<p>The 3D scattering information of multi-temporal PolSAR.</p>
Full article ">Figure 2
<p>Similar pixel selection for mixture-feature area.</p>
Full article ">Figure 3
<p>Similar pixel selection for crop-line area.</p>
Full article ">Figure 4
<p>Similar pixel selection for weak-feature area.</p>
Full article ">Figure 5
<p>Flowchart of the proposed MTPCM speckle filter.</p>
Full article ">Figure 6
<p>UAVSAR data. (<b>a</b>) 22 June 2012. (<b>b</b>) 23 June 2012. (<b>c</b>) 25 June 2012.</p>
Full article ">Figure 7
<p>Speckle filtering results for UAVSAR data (22 June 2012). (<b>a</b>) Boxcar. (<b>b</b>) Refined Lee. (<b>c</b>) Improved sigma. (<b>d</b>) IDAN. (<b>e</b>) SimiTest. (<b>f</b>) MTPCM.</p>
Full article ">Figure 8
<p>Speckle filtering comparison for homogeneous areas of UAVSAR data. (<b>a1</b>–<b>a7</b>) ROI1. (<b>b1</b>–<b>b7</b>) ROI2. (<b>c1</b>–<b>c7</b>) ROI3. The numbers 1–7 indicate original, boxcar filtered, refined Lee filtered, improved Sigma filtered, IDAN filtered, SimiTest filtered, and MTPCM filtered data, respectively.</p>
Full article ">Figure 9
<p>Edge detection comparison for ROI4 of UAVSAR data. (<b>a0</b>) Edge ground-truth. (<b>a1</b>–<b>a7</b>) Pauli image. (<b>b1</b>–<b>b7</b>) SPAN image. (<b>c1</b>–<b>c7</b>) Edge detection results. (<b>d1</b>–<b>d7</b>) Binary edge detection results. The numbers 1–7 indicate original, boxcar filtered, refined Lee filtered, improved Sigma filtered, IDAN filtered, SimiTest filtered, and MTPCM filtered data, respectively.</p>
Full article ">Figure 10
<p>Edge detection comparison for ROI5 of UAVSAR data. (<b>a0</b>) Edge ground-truth. (<b>a1</b>–<b>a7</b>) Pauli image. (<b>b1</b>–<b>b7</b>) SPAN image. (<b>c1</b>–<b>c7</b>) Edge detection results. (<b>d1</b>–<b>d7</b>) Binary edge detection results. The numbers 1–7 indicate original, boxcar filtered, refined Lee filtered, improved Sigma filtered, IDAN filtered, SimiTest filtered, and MTPCM filtered data, respectively.</p>
Full article ">Figure 11
<p>Edge detection comparison for ROI6 of UAVSAR data. (<b>a0</b>) Edge ground-truth. (<b>a1</b>–<b>g1</b>) Pauli images from original, boxcar filtered, refined Lee filtered, improved Sigma filtered, IDAN filtered, SimiTest filtered, and MTPCM filtered data, respectively. (<b>a2</b>–<b>g2</b>) The corresponding SPAN images. (<b>a3</b>–<b>g3</b>) The corresponding edge detection results. (<b>a4</b>–<b>g4</b>) The corresponding binary edge detection results.</p>
Full article ">Figure 12
<p>ALOS/PALSAR data. (<b>a</b>) 21 November 2010. (<b>b</b>) 8 April 2011.</p>
Full article ">Figure 13
<p>Speckle filtering results for ALOS/PALSAR data (8 April 2011). (<b>a</b>) Boxcar. (<b>b</b>) Refined Lee. (<b>c</b>) Improved Sigma. (<b>d</b>) IDAN. (<b>e</b>) SimiTest. (<b>f</b>) MTPCM.</p>
Full article ">Figure 14
<p>Speckle filtering results for ALOS/PALSAR data (21 November 2010). (<b>a</b>) Boxcar. (<b>b</b>) Refined Lee. (<b>c</b>) Improved Sigma. (<b>d</b>) IDAN. (<b>e</b>) SimiTest. (<b>f</b>) MTPCM.</p>
Full article ">
18 pages, 5301 KiB  
Article
Urban Flood Risk Assessment through the Integration of Natural and Human Resilience Based on Machine Learning Models
by Wenting Zhang, Bin Hu, Yongzhi Liu, Xingnan Zhang and Zhixuan Li
Remote Sens. 2023, 15(14), 3678; https://doi.org/10.3390/rs15143678 - 23 Jul 2023
Cited by 3 | Viewed by 2530
Abstract
Flood risk assessment and mapping are considered essential tools for the improvement of flood management. This research aims to construct a more comprehensive flood assessment framework by emphasizing factors related to human resilience and integrating them with meteorological and geographical factors. Moreover, two [...] Read more.
Flood risk assessment and mapping are considered essential tools for the improvement of flood management. This research aims to construct a more comprehensive flood assessment framework by emphasizing factors related to human resilience and integrating them with meteorological and geographical factors. Moreover, two ensemble learning models, namely voting and stacking, which utilize heterogeneous learners, were employed in this study, and their prediction performance was compared with that of traditional machine learning models, including support vector machine, random forest, multilayer perceptron, and gradient boosting decision tree. The six models were trained and tested using a sample database constructed from historical flood events in Hefei, China. The results demonstrated the following: (1) The RF model exhibited the highest accuracy, while the SVR model underestimated the extent of extremely high-risk areas and the stacking model underestimated the extent of very-high-risk areas. It should be noted that the prediction results of ensemble learning methods may not be superior to those of the base models upon which they are built. (2) The predicted high-risk and very-high-risk areas within the study area are predominantly clustered in low-lying regions along the rivers, aligning with the distribution of hazardous areas observed in historical inundation events. (3) The factor of distance to pumping stations has the second most significant driving influence after the DEM (Digital Elevation Model), which underscores the importance of considering human resilience factors. This study expands the empirical evidence that machine learning methods can be employed in flood risk assessment and deepens our understanding of the potential mechanisms through which human resilience influences urban flood risk. Full article
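The voting ensemble mentioned above combines heterogeneous base learners; in its soft form it averages the class-probability vectors of the base models and picks the class with the highest average. A minimal sketch under that assumption follows; the probabilities are toy values, not outputs of the models trained in the study.

```python
def soft_vote(probas):
    """Soft voting: average the class-probability vectors from the base models."""
    n = len(probas)
    k = len(probas[0])
    return [sum(p[i] for p in probas) / n for i in range(k)]

def predict(avg_probas):
    """Pick the class index with the highest averaged probability."""
    return max(range(len(avg_probas)), key=lambda i: avg_probas[i])

# Toy probabilities for classes [low risk, high risk] from three base models
# (e.g. SVM, RF, GBDT in the paper; the values here are illustrative only).
votes = [[0.4, 0.6], [0.3, 0.7], [0.55, 0.45]]
avg = soft_vote(votes)
cls = predict(avg)
```

Stacking differs in that, instead of a fixed average, a meta-learner is trained on the base models' predictions; as the abstract notes, neither construction is guaranteed to beat its best base model.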
Show Figures

Figure 1
<p>Location of the study area in China.</p>
Full article ">Figure 2
<p>Spatial distribution of the sample points.</p>
Full article ">Figure 3
<p>Flood risk assessment framework based on machine learning models.</p>
Full article ">Figure 4
<p>Spatial distribution of flood-influencing factors: (<b>a</b>) DEM, (<b>b</b>) aspect, (<b>c</b>) slope, (<b>d</b>) topographic relief, (<b>e</b>) distance to rivers, (<b>f</b>) distance to pump stations, (<b>g</b>) pipe network density, (<b>h</b>) land use, (<b>i</b>) daily precipitation during flood season.</p>
Full article ">Figure 5
<p>Training and testing ROC curves for each model.</p>
Full article ">Figure 6
<p>ROC curves of testing from the six models.</p>
Full article ">Figure 7
<p>Flood risk maps predicted by the six models.</p>
Full article ">Figure 8
<p>Area statistics of different risk categories for the six models.</p>
Full article ">Figure 9
<p>Contributions of the factors to urban flood risk according to GBDT and RF.</p>
Full article ">
21 pages, 7886 KiB  
Article
Mapping Irish Water Bodies: Comparison of Platforms, Indices and Water Body Type
by Minyan Zhao and Fiachra O’Loughlin
Remote Sens. 2023, 15(14), 3677; https://doi.org/10.3390/rs15143677 - 23 Jul 2023
Cited by 1 | Viewed by 1968
Abstract
Accurate monitoring of water bodies is essential for the management and regulation of water resources. Traditional methods for measuring water quality are typically time-consuming and expensive; furthermore, it can be very difficult to capture the full spatiotemporal variations across regions. Many studies have shown [...] Read more.
Accurate monitoring of water bodies is essential for the management and regulation of water resources. Traditional methods for measuring water quality are typically time-consuming and expensive; furthermore, it can be very difficult to capture the full spatiotemporal variations across regions. Many studies have shown the possibility of remote-sensing-based water monitoring in many areas, especially for water quality monitoring. However, the use of optical remotely sensed imagery depends on several factors, including weather, image quality and the size of water bodies. Hence, in this study, the feasibility of optical remote sensing for water quality monitoring in the Republic of Ireland was investigated. To assess the value of remote sensing for water quality monitoring, it is critical to know how well water bodies and the existing in situ monitoring stations are mapped. In this study, two satellite platforms (Sentinel-2 MSI and Landsat-8 OLI) and four indices for separating water and land pixels (Normalized Difference Vegetation Index—NDVI; Normalized Difference Water Index—NDWI; Modified Normalized Difference Water Index—MNDWI; and Automated Water Extraction Index—AWEI) have been used to create water masks for two scenarios. The first scenario (Scenario 1) included all pixels classified as water, while the second scenario (Scenario 2) accounted for potential land contamination and only used water pixels that were completely surrounded by other water pixels. The water masks for the different scenarios and combinations of platforms and indices were then compared with the existing water quality monitoring stations and with the shapefile of the river network, lakes and coastal and transitional water bodies. We found that both platforms had potential for water quality monitoring in the Republic of Ireland, with Sentinel-2 outperforming Landsat-8 due to its finer spatial resolution. 
Overall, Sentinel-2 was able to map ~25% of the existing monitoring stations, while Landsat-8 could map only ~21%. These percentages were heavily impacted by the large number of river monitoring stations that were difficult to map with either satellite due to their location on smaller rivers. Our results showed the importance of testing several indices: no single index performed best across the different platforms. The combination of AWEInsh (Automated Water Extraction Index—no shadow) and Sentinel-2 outperformed all other combinations and was able to map over 80% of the area of all non-river water bodies across the Republic of Ireland. While MNDWI was the best index for Landsat-8, it was the worst performer for Sentinel-2. This study showed that optical remote sensing has potential for water monitoring in the Republic of Ireland, especially for larger rivers, lakes and transitional and coastal water bodies. Full article
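The four indices compared in the study have standard published forms: NDWI (McFeeters, 1996), MNDWI (Xu, 2006) and the two AWEI variants (Feyisa et al., 2014). The sketch below reproduces those published formulas; the band reflectances are generic illustrative values, not measurements from the paper.

```python
def ndwi(green, nir):
    """NDWI (McFeeters, 1996): positive over water, negative over vegetation."""
    return (green - nir) / (green + nir)

def mndwi(green, swir1):
    """MNDWI (Xu, 2006): replaces NIR with SWIR to suppress built-up noise."""
    return (green - swir1) / (green + swir1)

def awei_nsh(green, nir, swir1, swir2):
    """AWEI 'no shadow' variant (Feyisa et al., 2014)."""
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

def awei_sh(blue, green, nir, swir1, swir2):
    """AWEI 'shadow' variant (Feyisa et al., 2014)."""
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2

# Typical surface reflectances (illustrative): water is dark in NIR/SWIR,
# vegetation is bright in NIR.
water = dict(blue=0.06, green=0.05, nir=0.02, swir1=0.01, swir2=0.005)
veg = dict(blue=0.04, green=0.08, nir=0.40, swir1=0.20, swir2=0.10)
```

Thresholding any of these indices (e.g. with Otsu's method, as in the study's workflow) then separates water pixels from land pixels, since water scores positive and land negative for the water indices.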
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location map showing the Republic of Ireland and its river, lakes and coastal and transitional water bodies.</p>
Full article ">Figure 2
<p>Methodology flowchart. Input data are indicated by cylinders, while output data and processing steps are shown in orange and green, respectively. Maps used in comparison are shown in the gray frame. Remotely sensed imagery from January 2018 to December 2019 was used.</p>
Full article ">Figure 3
<p>Pan-sharpening flowchart for Sentinel-2 data.</p>
Full article ">Figure 4
<p>Comparison of Otsu and Bottom Valley approaches for land–water thresholding (Landsat-8 and NDWI used).</p>
Full article ">Figure 5
<p>Location of the regions used in detailed analysis. Central panel (<b>a</b>) shows the locations of the two regions: Dublin Bay (right-up panel (<b>b</b>)) and Lough Derg (right-down panel (<b>c</b>)).</p>
Full article ">Figure 6
<p>Landsat-8-derived water masks of Lough Derg (<b>top 12 panels</b>) and Dublin Bay (<b>bottom 12 panels</b>) for Scenario 1 and Scenario 2.</p>
Full article ">Figure 7
<p>Sentinel-2-derived water masks of Lough Derg (<b>top 12 panels</b>) and Dublin Bay (<b>bottom 12 panels</b>) for Scenario 1 and Scenario 2.</p>
Full article ">Figure 8
<p>Monitoring stations mapped using the best indices for each platform: (<b>a</b>) Landsat-8 and MNDWI1; (<b>b</b>) all monitoring stations; (<b>c</b>) Sentinel-2 and AWEI<sub>sh</sub>.</p>
Full article ">Figure 9
<p>The percentage of mapped areas for each lake segment with respect to their area for Landsat-8 for Scenario 1. Three different ranges of lake area are highlighted: ① Lake area &lt; 0.1 km<sup>2</sup>; ② 0.1 km<sup>2</sup> &lt; Lake area &lt; 1 km<sup>2</sup>; and ③ Lake area &gt; 1 km<sup>2</sup>.</p>
Full article ">Figure 10
<p>The percentage of mapped areas for each lake segment with regards to their area for Sentinel-2 for Scenario 1. Three different ranges of lake area are highlighted: ① Lake area &lt; 0.1 km<sup>2</sup>; ② 0.1 km<sup>2</sup> &lt; Lake area &lt; 1 km<sup>2</sup>; and ③ Lake area &gt; 1 km<sup>2</sup>.</p>
Full article ">Figure 11
<p>Boxplots showing percentage area of water body mapped by Landsat-8 and Sentinel-2 under Scenario 1 and Scenario 2.</p>
Full article ">
18 pages, 11121 KiB  
Article
Multi-Scale Feature Residual Feedback Network for Super-Resolution Reconstruction of the Vertical Structure of the Radar Echo
by Xiangyu Fu, Qiangyu Zeng, Ming Zhu, Tao Zhang, Hao Wang, Qingqing Chen, Qiu Yu and Linlin Xie
Remote Sens. 2023, 15(14), 3676; https://doi.org/10.3390/rs15143676 - 23 Jul 2023
Viewed by 1086
Abstract
The vertical structure of radar echo is crucial for understanding complex microphysical processes of clouds and precipitation, and for providing essential data support for the study of low-level wind shear and turbulence formation, evolution, and dissipation. Therefore, finding methods to improve the vertical [...] Read more.
The vertical structure of radar echo is crucial for understanding complex microphysical processes of clouds and precipitation, and for providing essential data support for the study of low-level wind shear and turbulence formation, evolution, and dissipation. Therefore, finding methods to improve the vertical data resolution of the existing radar network is of great importance. Existing algorithms for improving image resolution usually focus on increasing the width and height of images. However, improving the vertical data resolution of weather radar requires a focus on improving the elevation angle resolution while maintaining distance resolution. To address this challenge, we propose a network for super-resolution reconstruction of weather radar echo vertical structures. The network is based on a multi-scale residual feedback network (MR-FBN) and uses new multi-scale feature residual blocks (MSRB) to effectively extract and utilize data features at different scales. The feedback network gradually generates the final high-resolution vertical structure data. In addition, we propose an elevation upsampling layer (EUL) specifically for this task, replacing the traditional image sub-pixel convolution layer. Experimental results show that the proposed method can effectively improve the elevation angle resolution of weather radar echo vertical structure data, providing valuable support for atmospheric detection. Full article
(This article belongs to the Special Issue Doppler Radar: Signal, Data and Applications)
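For context on the layer being replaced: a traditional sub-pixel convolution rearranges r times as many learned feature channels into a spatially r-times-finer grid ("pixel shuffle"). The 1D sketch below illustrates that rearrangement along a single elevation-like axis; it is a generic illustration of pixel shuffle, not the paper's elevation upsampling layer (EUL).

```python
def pixel_shuffle_1d(channels):
    """Rearrange r channels of length N into one sequence of length r*N
    (1D analogue of sub-pixel convolution: out[i*r + c] = channels[c][i])."""
    r, n = len(channels), len(channels[0])
    out = [0.0] * (r * n)
    for c in range(r):
        for i in range(n):
            out[i * r + c] = channels[c][i]
    return out

# Two learned feature channels -> x2 resolution along the elevation axis
coarse = [[1, 3, 5], [2, 4, 6]]
fine = pixel_shuffle_1d(coarse)  # [1, 2, 3, 4, 5, 6]
```

The key property is that upsampling happens by reshaping channels rather than by interpolation, so the network learns what the interleaved fine-grid samples should be.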
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Schematic diagram of VCP11 and VCP21 scanning strategy.</p>
Full article ">Figure 2
<p>The overall structure of MR-FBN.</p>
Full article ">Figure 3
<p>The structure of a multi-scale fusion residual block (MSRB).</p>
Full article ">Figure 4
<p>Upsampling Block: a convolutional layer is used to extract features, and an improved sub-pixel convolutional layer is used to aggregate feature maps in LR space.</p>
Full article ">Figure 5
<p>RXM-25 Radar.</p>
Full article ">Figure 6
<p>Reflectivity distribution; yellow denotes the original data, green the processed data.</p>
Full article ">Figure 7
<p>Case1 Visual comparison with other methods on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">Figure 8
<p>Case2 Visual comparison with other methods on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">Figure 9
<p>Case3 Visual comparison with other methods on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">Figure 10
<p>Case1 Comparison of different methods in different reflectivity intervals on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">Figure 11
<p>Case2 Comparison of different methods in different reflectivity intervals on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">Figure 12
<p>Case3 Comparison of different methods in different reflectivity intervals on ×2SR (<b>a</b>) and ×4SR (<b>b</b>).</p>
Full article ">
30 pages, 6565 KiB  
Review
Google Earth Engine: A Global Analysis and Future Trends
by Andrés Velastegui-Montoya, Néstor Montalván-Burbano, Paúl Carrión-Mero, Hugo Rivera-Torres, Luís Sadeck and Marcos Adami
Remote Sens. 2023, 15(14), 3675; https://doi.org/10.3390/rs15143675 - 23 Jul 2023
Cited by 27 | Viewed by 19312
Abstract
The continuous increase in the volume of geospatial data has led to the creation of storage tools and the cloud to process data. Google Earth Engine (GEE) is a cloud-based platform that facilitates geoprocessing, making it a tool of great interest to the [...] Read more.
The continuous increase in the volume of geospatial data has led to the creation of storage tools and cloud services to process data. Google Earth Engine (GEE) is a cloud-based platform that facilitates geoprocessing, making it a tool of great interest to the academic and research world. This article presents a bibliometric analysis of the GEE platform's scientific production. The methodology consists of four phases. The first phase corresponds to selecting "search" criteria, followed by the second phase focused on collecting data for the 2011–2022 period using Elsevier's Scopus database. In the third phase, the published articles were reviewed using bibliometric software. Finally, the results were analyzed and interpreted in the last phase. The search found 2800 documents with contributions from 125 countries, with China and the USA leading with the highest contributions, supporting a growing use of GEE for the visualization and processing of geospatial data. The intellectual structure study and knowledge mapping showed that topics of interest included satellites, sensors, remote sensing, machine learning, land use and land cover. The co-citation analysis revealed the connections between the researchers who used the GEE platform in their research papers. GEE has proven to be an emerging web platform with the potential to easily manage big satellite data. Furthermore, GEE is considered a multidisciplinary tool with multiple applications in various areas of knowledge. This research adds to the current knowledge about the Google Earth Engine platform, analyzing its cognitive structure related to the research in the Scopus database. In addition, this study presents inferences and suggestions for developing future work with this methodology. Full article
(This article belongs to the Special Issue Google Earth Engine for Geo-Big Data Applications)
Show Figures

Figure 1
<p>Scheme of the methodology applied in this research.</p>
Full article ">Figure 2
<p>Evolution of scientific production on GEE, considering (i) annual publications: number of publications per year, and (ii) cited documents: number of citations registered per year.</p>
Full article ">Figure 3
<p>Map of countries that have conducted studies using the GEE platform, according to the number of publications.</p>
Full article ">Figure 4
<p>Countries network.</p>
Full article ">Figure 5
<p>Main subject areas of GEE research in Scopus.</p>
Full article ">Figure 6
<p>Satellites and sensors most mentioned in publications about GEE from 2011 to 2022.</p>
Full article ">Figure 7
<p>Main remote sensing applications studied in GEE and their evolution over time.</p>
Full article ">Figure 8
<p>Co-occurrence author keyword network.</p>
Full article ">Figure 9
<p>Co-authorship network. (<b>a</b>) Co-authorship by country. (<b>b</b>) Co-authorship by author.</p>
Full article ">Figure 10
<p>Co-citation network of cited authors.</p>
Full article ">Figure 11
<p>Journal co-citation network.</p>
Full article ">
21 pages, 5263 KiB  
Article
Precision Aquaculture Drone Mapping of the Spatial Distribution of Kappaphycus alvarezii Biomass and Carrageenan
by Nurjannah Nurdin, Evangelos Alevizos, Rajuddin Syamsuddin, Hasni Asis, Elmi Nurhaidah Zainuddin, Agus Aris, Simon Oiry, Guillaume Brunier, Teruhisa Komatsu and Laurent Barillé
Remote Sens. 2023, 15(14), 3674; https://doi.org/10.3390/rs15143674 - 23 Jul 2023
Cited by 5 | Viewed by 2645
Abstract
The aquaculture of Kappaphycus alvarezii (Kappaphycus hereafter) seaweed has rapidly expanded among coastal communities in Indonesia due to its relatively simple farming process, low capital costs and short production cycles. This species is mainly cultivated for its carrageenan content used as a [...] Read more.
The aquaculture of Kappaphycus alvarezii (Kappaphycus hereafter) seaweed has rapidly expanded among coastal communities in Indonesia due to its relatively simple farming process, low capital costs and short production cycles. This species is mainly cultivated for its carrageenan content used as a gelling agent in the food industry. To further assist producers in improving cultivation management and providing quantitative information about the yield, a novel approach involving remote sensing techniques was tested. In this study, multispectral images obtained from a drone (Unoccupied Aerial Vehicle, UAV) were processed to estimate the fresh and carrageenan weights of Kappaphycus at a cultivation site in South Sulawesi. The UAV imagery was geometrically and radiometrically corrected, and the resulting orthomosaics were used for detecting and classifying Kappaphycus using a random forest algorithm. The classification results were combined with in situ measurements of Kappaphycus fresh weight and carrageenan content using empirical relations between the area and weight of fresh seaweed/carrageenan. This approach allowed seaweed biometry and biochemistry to be quantified at the scale of single cultivation lines and of whole cultivation plots. Fresh seaweed and carrageenan weights were estimated for different dates within three distinct cultivation cycles, and the daily growth rate for each cycle was derived. Data were upscaled to a small family-scale farm and a large-scale leader farm and compared with previous estimations. To our knowledge, this study provides, for the first time, an estimation of yield at the scale of cultivation lines by exploiting the very high spatial resolution of drone data. Overall, the use of UAV remote sensing proved to be a promising approach for seaweed monitoring, opening the way to precision aquaculture of Kappaphycus. Full article
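The empirical area-to-weight relation is a power model (as shown in the study's Figure 4), which can be fitted by ordinary least squares after a log-log transform. A minimal sketch with synthetic data follows; the coefficients 0.5 and 1.2 used to generate the toy points are arbitrary assumptions, not the paper's fitted values.

```python
import math

def fit_power(areas, weights):
    """Fit W = a * A**b by least squares in log-log space."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(w) for w in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic (area cm^2, fresh weight g) pairs generated from W = 0.5 * A**1.2
areas = [100.0, 200.0, 400.0, 800.0]
weights = [0.5 * a ** 1.2 for a in areas]
a, b = fit_power(areas, weights)
```

Once fitted, the model converts the classified Kappaphycus area in each pixel neighborhood into a fresh-weight (and, via carrageenan content, a carrageenan-weight) estimate, which is how the per-line and per-plot maps are built.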
Graphical abstract
Figure 1
<p>(<b>A</b>) Indonesian archipelago with the red rectangle indicating South West Sulawesi; the sub-district of Punaga with the black rectangle showing the study area. (<b>B</b>) Individual thalli of the green variant of <span class="html-italic">Kappaphycus alvarezii</span> cultivated in Punaga. (<b>C</b>) False color orthomosaic of the farming area showing cultivation plots (=parcels) with a variable number of long lines. (<b>D</b>) Close-up corresponding to the black rectangle in C; ca. 32 lines with <span class="html-italic">Kappaphycus</span> can be seen. Plastic bottles used as floats appear white.</p>
Figure 2
<p>Workflow diagram including image processing, classification and geospatial analyses tasks followed in this study.</p>
Figure 3
<p>False color close-ups of cultivation lines overlaid with an example of: (<b>A</b>) Training polygons, (<b>B</b>) Validation set with pixel-based interpretations.</p>
Figure 4
<p>Relationship between <span class="html-italic">Kappaphycus</span> area (cm<sup>2</sup>) and fresh weight (g) obtained from in situ samples (dots) collected during three cultivation cycles in Punaga (South Sulawesi). The power model (red line) is represented with its 95% confidence intervals (blue lines).</p>
Figure 5
<p>Average ranking of predictor variable importance, based on the Gini decrease score resulting from nine classification runs (one for each date with drone data, <a href="#remotesensing-15-03674-t003" class="html-table">Table 3</a>). 1: high importance, 5: low importance.</p>
Figure 6
<p>Mapping of a <span class="html-italic">Kappaphycus</span> cultivation plot. The cultivation plot has 32 lines of 25 m; an isolated line can be seen on the right part of each image. (<b>A</b>) False-color mosaic of the first date (t<sub>0</sub>) of cycle 1. (<b>B</b>) Random forest classification of the scene, (<b>C</b>) Spatial distribution of fresh weight per unit area, (<b>D</b>) Spatial distribution of carrageenan weight per unit area. The area is defined by a neighborhood of a 20 cm radius around each pixel.</p>
Figure 7
<p>Comparison of four monitored lines of 25 m illustrating the increase in carrageenan between the start (t<sub>0</sub>) and the end of a cultivation cycle (t<sub>40</sub>). Carrageenan is expressed in weight per unit area (g·m<sup>−2</sup>). The area is defined by a neighborhood of a 20 cm radius around each pixel. The values at the bottom of each line indicate the total weight of carrageenan produced by the corresponding line.</p>
Figure A1
<p>Boxplots resulted from an example training set showing the class separation at each of the six predictor orthomosaics (K = <span class="html-italic">Kappaphycus</span>). Crosses indicate outliers. The bottom and top of the blue rectangles represent the 25th and 75th percentiles respectively, whereas the red line indicates the median value. The whiskers extend to the minimum and maximum values that are not considered outliers (i.e., they are no more than ±2.7 σ apart).</p>
Figure A2
<p><span class="html-italic">Kappaphycus</span> increase in fresh weight (<b>A</b>) and carrageenan weight (<b>B</b>) during three cultivation cycles in Punaga (South Sulawesi) in 2022 estimated from drone imagery. The weights are expressed per linear meter of cultivation lines. Boxplots resulted from samples of 4–8 individual monitoring lines. The bottom and top of the blue rectangles represent the 25th and 75th percentiles respectively, whereas the red line indicates the median value. The whiskers extend to the minimum and maximum values that are not considered outliers (i.e., they are no more than ±2.7 σ apart).</p>
21 pages, 6823 KiB  
Article
Research on an Intra-Pulse Orthogonal Waveform and Methods Resisting Interrupted-Sampling Repeater Jamming within the Same Frequency Band
by Huahua Dai, Yingxiao Zhao, Hanning Su, Zhuang Wang, Qinglong Bao and Jiameng Pan
Remote Sens. 2023, 15(14), 3673; https://doi.org/10.3390/rs15143673 - 23 Jul 2023
Cited by 4 | Viewed by 1110
Abstract
Interrupted-sampling repeater jamming (ISRJ) is a kind of intra-pulse coherent deception jamming that generates false target peaks in the range profile and interferes with the detection and tracking of real targets. In this paper, an anti-ISRJ method based on an intra-pulse orthogonal waveform is proposed, which recognizes common interference signals by comparing the matched filtering results of the two sub-signals. For special scenarios in which real targets cannot be directly distinguished from false targets, a new recognition method based on the energy discontinuity of the interference signal in the time domain is also proposed. The proposed method can recognize real and false targets under all ISRJ modes without any prior information, such as jammer parameters, and at low computational cost, making it suitable for practical radar systems. Simulation experiments with different interference parameters show that, although the method incurs a 3 dB loss in pulse compression gain, it completely suppresses all kinds of ISRJ when the SNR before pulse compression is higher than −20 dB, with a 100% target detection probability. Full article
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
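The core idea, comparing the matched-filter outputs of two orthogonal sub-signals, can be illustrated with a toy simulation. The sketch below is not the paper's exact waveform: it uses a generic up/down chirp pair, hypothetical sample delays, and a noise-free echo. A real target returns both sub-signals, while a direct-repeat ISRJ false target repeats only the sub-signal the jammer sampled, so it peaks in only one output.

```python
import numpy as np

# Illustrative parameters (not from the paper)
fs, Tp, B = 1e6, 100e-6, 200e3
n = int(round(Tp * fs))                  # 100 samples per sub-signal
t = np.arange(n) / fs
k = B / Tp                               # chirp rate
sub1 = np.exp(1j * np.pi * k * t**2)     # sub-signal 1: up-chirp
sub2 = np.exp(-1j * np.pi * k * t**2)    # sub-signal 2: down-chirp

# Synthetic echo: the real target reflects BOTH sub-signals, while the
# interrupted-sampling jammer captured and repeats only sub-signal 1.
N, tgt, jam = 1024, 200, 600             # hypothetical sample delays
echo = np.zeros(N, complex)
echo[tgt:tgt + n] += sub1 + sub2         # true target
echo[jam:jam + n] += sub1                # ISRJ false target

# Matched filtering against each sub-signal (np.correlate conjugates
# its second argument, so this is a true matched filter).
mf1 = np.abs(np.correlate(echo, sub1, mode="valid"))
mf2 = np.abs(np.correlate(echo, sub2, mode="valid"))

def is_true_target(delay, thresh=0.5 * n):
    # a real target produces a peak in BOTH matched-filter outputs;
    # an ISRJ false target peaks in only one of them
    return mf1[delay] > thresh and mf2[delay] > thresh
```

Here `is_true_target(tgt)` is True and `is_true_target(jam)` is False; the paper's additional time-domain energy-discontinuity test handles the special cases where this simple comparison is ambiguous.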
Figure 1
<p>Three kinds of repeater jamming modes of jammers. (<b>a</b>) The direct repeater jamming mode, (<b>b</b>) the repeated repeater jamming mode, and (<b>c</b>) the periodic repeater jamming mode.</p>
Figure 2
<p>The curve of the cross-correlation coefficient versus <math display="inline"><semantics><mrow><msub><mi>T</mi><mi mathvariant="normal">p</mi></msub><mi>B</mi></mrow></semantics></math>.</p>
Figure 3
<p>The intra-pulse orthogonal waveform. (<b>a</b>) The time–frequency diagram and (<b>b</b>) the ambiguity function.</p>
Figure 4
<p>The overall scheme of an anti-ISRJ interference method based on waveform design.</p>
Figure 5
<p>The processing results of echo waveforms with interference under direct repeater jamming mode. (<b>a</b>) The time–frequency diagram (noise free); (<b>b</b>) the time–frequency diagram (SNR = −15 dB); (<b>c</b>) the echo of the sub-signal 1 after matched filtering; (<b>d</b>) the echo of the sub-signal 2 after matched filtering.</p>
Figure 6
<p>The processing results of echo waveforms with interference under direct repeater jamming mode and repeated repeater jamming mode. (<b>a</b>) The time–frequency diagram (noise-free); (<b>b</b>) the time–frequency diagram (SNR = −15 dB); (<b>c</b>) the echo of the sub-signal 1 after matched filtering; (<b>d</b>) the echo of the sub-signal 2 after matched filtering.</p>
Figure 7
<p>The proportion of the energy accumulation of IIS in each segment when <math display="inline"><semantics><mrow><mi>H</mi><mo>=</mo><mn>12</mn></mrow></semantics></math> (the yellow dotted line denotes the average proportion). (<b>a</b>) Peak 1, (<b>b</b>) Peak 2, (<b>c</b>) Peak 3, (<b>d</b>) Peak 4, (<b>e</b>) Peak 5, (<b>f</b>) Peak 6, (<b>g</b>) Peak 7, and (<b>h</b>) Peak 8.</p>
Figure 8
<p>The IIS amplitude accumulation curve <math display="inline"><semantics><mrow><mi>C</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> for each target peak. (<b>a</b>) Peak 1, (<b>b</b>) Peak 2, (<b>c</b>) Peak 3, (<b>d</b>) Peak 4, (<b>e</b>) Peak 5, (<b>f</b>) Peak 6, (<b>g</b>) Peak 7, and (<b>h</b>) Peak 8.</p>
Figure 9
<p>The proportion of the energy accumulation of IIS in each segment (the yellow dotted line denotes the average proportion). (<b>a</b>) <math display="inline"><semantics><mrow><mi>H</mi><mo>=</mo><mn>6</mn></mrow></semantics></math>, peak 2, (<b>b</b>) <math display="inline"><semantics><mrow><mi>H</mi><mo>=</mo><mn>6</mn></mrow></semantics></math>, peak 3, (<b>c</b>) <math display="inline"><semantics><mrow><mi>H</mi><mo>=</mo><mn>2</mn></mrow></semantics></math>, peak 2, (<b>d</b>) <math display="inline"><semantics><mrow><mi>H</mi><mo>=</mo><mn>2</mn></mrow></semantics></math>, peak 5.</p>
Figure 10
<p>The fitted piecewise straight line <math display="inline"><semantics><mrow><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> and the second-order difference curve <math display="inline"><semantics><mrow><msup><mo>Δ</mo><mn>2</mn></msup><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math>. (<b>a</b>) The fitted piecewise straight line <math display="inline"><semantics><mrow><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> for peak 3; (<b>b</b>) the fitted piecewise straight line <math display="inline"><semantics><mrow><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> for peak 5; (<b>c</b>) the second-order difference curve <math display="inline"><semantics><mrow><msup><mo>Δ</mo><mn>2</mn></msup><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> for peak 3; (<b>d</b>) the second-order difference curve <math display="inline"><semantics><mrow><msup><mo>Δ</mo><mn>2</mn></msup><mi>E</mi><mfenced><mi>k</mi></mfenced></mrow></semantics></math> for peak 5.</p>
Figure 11
<p>The curves of detection probability versus SNR.</p>
Figure 12
<p>The IIS standard deviation of the segmented energy integrals in 1000 Monte Carlo simulations when <span class="html-italic">H</span> = 12. (<b>a</b>) Maximum value; (<b>b</b>) minimum value; (<b>c</b>) average value.</p>
17 pages, 11842 KiB  
Article
Regional Climate Effects of Irrigation under Central Asia Warming by 2.0 °C
by Liyang Wu and Hui Zheng
Remote Sens. 2023, 15(14), 3672; https://doi.org/10.3390/rs15143672 - 23 Jul 2023
Cited by 1 | Viewed by 1151
Abstract
Water resources are severely scarce in Central Asia, and agriculture is highly dependent on irrigation because of the low precipitation over the croplands. Central Asia is also warming in the context of global climate change; however, few studies have examined how the irrigation amount in Central Asia will change under future warming, or the regional climate effects of those changes. In this study, we used the Weather Research and Forecasting (WRF) model to design three types of experiments: historical experiments (Exp01), warming experiments using future driving fields (Exp02), and warming experiments that increased the surface energy (Exp03). In each type, two experiments (with and without irrigation) were carried out. We analyzed the regional climate effects of irrigation under a 2.0 °C warming of Central Asia through the differences between the two types of warming experiments and the historical experiments. For the surface variables (irrigation amount, sensible heat flux, latent heat flux, and surface air temperature), the changes (relative to Exp01) in Exp03 were considered reliable; for precipitation, the changes (relative to Exp01) in Exp02 were considered reliable. The main conclusions are as follows: in Central Asia, after warming by 2.0 °C, the irrigation amount increased by 10–20%; in the irrigated croplands, the irrigation-caused increases (decreases) in latent (sensible) heat flux expanded further, and the irrigation-caused decreases in surface air temperature were correspondingly enhanced; during the irrigation period, the irrigation-caused increases in precipitation in the mid-latitude mountainous areas were reduced. This study also showed that, in the WRF model, warming experiments driven only by future forcing fields are not suitable for simulating the changes in irrigation amount under climate warming. Full article
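The irrigation effect and its change under warming are isolated by differencing paired experiments: (SEN − CTL) within each climate, and then the warming-climate difference minus the historical one (as in Figures 6 and 7). A minimal numpy sketch of this difference-of-differences logic, with invented temperature fields:

```python
import numpy as np

# Hypothetical 20-year-mean surface air temperature fields (°C) on a
# lat x lon grid; the values are invented purely to illustrate the algebra.
rng = np.random.default_rng(1)
shape = (60, 90)
ctl_hist = rng.normal(25.0, 1.0, shape)  # CTL (no irrigation), Exp01
sen_hist = ctl_hist - 0.8                # SEN (irrigation): surface cooling
ctl_warm = ctl_hist + 2.0                # CTL under 2.0 °C warming (Exp03)
sen_warm = ctl_warm - 1.1                # irrigation cooling strengthens

# Irrigation effect in each climate: SEN minus CTL
effect_hist = sen_hist - ctl_hist
effect_warm = sen_warm - ctl_warm

# Change of the irrigation effect under warming,
# i.e. (SEN - CTL, Exp03) minus (SEN - CTL, Exp01)
change = effect_warm - effect_hist       # uniformly ≈ -0.3 °C here
```

With real WRF output the same subtraction is applied gridpoint by gridpoint to the multi-year means of each variable.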
Figure 1
<p>(<b>a</b>) The altitude (unit: m) of the simulated region. (<b>b</b>) Land use type in the simulated domain. (<b>c</b>) Irrigation fraction (unit: %) in the simulated domain.</p>
Figure 2
<p>Spatial differences ((<b>a</b>) CTL in Exp02 minus CTL in Exp01; (<b>b</b>) CTL in Exp03 minus CTL in Exp01) in multi-year (20 years) annual mean surface air temperature (unit: °C). The 20 years denoted were from 1995 to 2014 in Exp 01 and Exp03, and from 2036 to 2055 in Exp02 (the same below).</p>
Figure 3
<p>(<b>a</b>) Non-irrigation period mean and (<b>b</b>) irrigation period mean precipitation (unit: mm day<sup>−1</sup>) (CTL in Exp01). Spatial differences (CTL in Exp02 minus CTL in Exp01) in (<b>c</b>) non-irrigation period mean and (<b>d</b>) irrigation period mean precipitation (unit: mm day<sup>−1</sup>). Spatial differences (CTL in Exp03 minus CTL in Exp01) in (<b>e</b>) non-irrigation period mean and (<b>f</b>) irrigation period mean precipitation (unit: mm day<sup>−1</sup>). The results shown were all the multi-year (20 years) mean values.</p>
Figure 4
<p>Annual mean (<b>a</b>) latent heat flux and (<b>b</b>) sensible heat flux (unit: W m<sup>−2</sup>) (CTL in Exp01). Spatial differences (CTL in Exp02 minus CTL in Exp01) in annual mean (<b>c</b>) latent heat flux and (<b>d</b>) sensible heat flux (unit: W m<sup>−2</sup>). Spatial differences (CTL in Exp03 minus CTL in Exp01) in annual mean (<b>e</b>) latent heat flux and (<b>f</b>) sensible heat flux (unit: W m<sup>−2</sup>). The results shown were all the multi-year (20 years) mean values.</p>
Figure 5
<p>Spatial distributions of multi-year (20 years) irrigation period mean irrigation rate in (<b>a</b>) Exp01 (unit: mm day<sup>−1</sup>), (<b>b</b>) (Exp02 minus Exp01)/Exp01 (unit: %), and (<b>c</b>) (Exp03 minus Exp01)/Exp01 (unit: %).</p>
Figure 6
<p>Spatial differences (SEN in Exp01 minus CTL in Exp01) in (<b>a</b>) latent heat flux (unit: W m<sup>−2</sup>), (<b>b</b>) sensible heat flux (unit: W m<sup>−2</sup>), and (<b>c</b>) surface air temperature (unit: °C). Spatial differences ((SEN in Exp02 minus CTL in Exp02) minus (SEN in Exp01 minus CTL in Exp01)) in (<b>d</b>) latent heat flux (unit: W m<sup>−2</sup>), (<b>e</b>) sensible heat flux (unit: W m<sup>−2</sup>), and (<b>f</b>) surface air temperature (unit: °C). Spatial differences ((SEN in Exp03 minus CTL in Exp03) minus (SEN in Exp01 minus CTL in Exp01)) in (<b>g</b>) latent heat flux (unit: W m<sup>−2</sup>), (<b>h</b>) sensible heat flux (unit: W m<sup>−2</sup>), and (<b>i</b>) surface air temperature (unit: °C). The results shown were the multi-year (20 years) irrigation period mean values.</p>
Figure 7
<p>(<b>a</b>) Spatial differences (SEN in Exp01 minus CTL in Exp01) in multi-year (20 years) irrigation period mean precipitation (unit: mm day<sup>−1</sup>). (<b>b</b>) Spatial differences ((SEN in Exp02 minus CTL in Exp02) minus (SEN in Exp01 minus CTL in Exp01)) in multi-year (20 years) irrigation period mean precipitation (unit: mm day<sup>−1</sup>). (<b>c</b>) Spatial differences ((SEN in Exp03 minus CTL in Exp03) minus (SEN in Exp01 minus CTL in Exp01)) in multi-year (20 years) irrigation period mean precipitation (unit: mm day<sup>−1</sup>).</p>
Figure 8
<p>Height–longitude cross sections (averaged over the latitude range of simulation domain) of differences ((<b>a</b>) CTL in Exp02 minus CTL in Exp01; (<b>b</b>) CTL in Exp03 minus CTL in Exp01)) in multi-year (20 years) irrigation period mean temperature (unit: °C). Height–longitude cross sections (averaged over the latitude range of simulation domain) of differences ((<b>c</b>) CTL in Exp02 minus CTL in Exp01; (<b>d</b>) CTL in Exp03 minus CTL in Exp01)) in multi-year (20 years) irrigation period mean water vapor mixing ratio (unit: g kg<sup>−1</sup>).</p>
Figure 9
<p>Spatial differences ((<b>a</b>) CTL in Exp02 minus CTL in Exp01; (<b>b</b>) CTL in Exp03 minus CTL in Exp01) in multi-year (20 years) irrigation period mean low cloud cover (unit: %). Low cloud cover is for 0.8 &lt; sigma &lt; 1.0 (sigma = pressure/surface pressure).</p>
19 pages, 13191 KiB  
Article
Automatic Monitoring of Maize Seedling Growth Using Unmanned Aerial Vehicle-Based RGB Imagery
by Min Gao, Fengbao Yang, Hong Wei and Xiaoxia Liu
Remote Sens. 2023, 15(14), 3671; https://doi.org/10.3390/rs15143671 - 23 Jul 2023
Cited by 6 | Viewed by 1512
Abstract
Accurate and rapid monitoring of maize seedling growth is critical for early breeding decisions, field management, and yield improvement. However, the number and uniformity of seedlings are conventionally determined by manual evaluation, which is inefficient and unreliable. In this study, we propose an automatic assessment method for maize seedling growth using unmanned aerial vehicle (UAV) RGB imagery. First, high-resolution images of maize at the early and late seedling stages (before and after the third leaf) were acquired with the UAV RGB system. Second, the maize seedling center detection index (MCDI) was constructed, significantly enhancing the color contrast between young and old leaves and facilitating the segmentation of maize seedling centers. Weed noise was then removed by morphological processing and a dual-threshold method, and maize seedlings were extracted using a connected component labeling algorithm. Finally, the emergence rate, canopy coverage, and seedling uniformity in the field at the seedling stage were calculated and analyzed in combination with the number of seedlings. The results show that our approach performs well for maize seedling counting, with an average R2 greater than 0.99 and an F1-score greater than 98.5%. The estimation accuracies at the third leaf stage (V3) for the mean emergence rate and the mean seedling uniformity were 66.98% and 15.89%, respectively; at the sixth leaf stage (V6), the estimation accuracies for the mean seedling canopy coverage and the mean seedling uniformity were 32.21% and 8.20%, respectively. Our approach provides automatic per-plot monitoring of maize growth during early growth stages and shows promising performance for precision agriculture in seedling management. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
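The segment-and-count pipeline (index image → threshold → connected component labeling) can be sketched as follows. The MCDI formula is not given in the abstract, so a generic single-band index image stands in for it, and Otsu thresholding plus plain 4-connectivity labeling are used as standard stand-ins for the paper's specific segmentation and noise-removal steps.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's threshold for an 8-bit single-band index image."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # degenerate thresholds
    return int(np.argmax(sigma_b))          # maximize between-class variance

def label_components(mask):
    """4-connected component labeling via iterative flood fill."""
    H, W = mask.shape
    labels = np.zeros((H, W), int)
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                n += 1                       # start a new component
                stack = [(i, j)]
                labels[i, j] = n
                while stack:
                    y, x = stack.pop()
                    for v, u in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= v < H and 0 <= u < W and mask[v, u] and labels[v, u] == 0:
                            labels[v, u] = n
                            stack.append((v, u))
    return labels, n

# Toy index image: dark soil background with three bright "seedlings"
img = np.full((30, 30), 40, np.uint8)
img[2:6, 2:6] = 200
img[10:14, 20:24] = 200
img[22:27, 5:9] = 200

thr = otsu_threshold(img)
mask = img > thr
labels, count = label_components(mask)      # count == 3 seedlings
```

Per-plot counts from this step feed directly into the emergence-rate, coverage, and uniformity statistics the paper reports.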
Figure 1
<p>The flowchart of the proposed methodology.</p>
Figure 2
<p>Geographical position of the study area.</p>
Figure 3
<p>Maize images in typical test area: (<b>a</b>) ROI1 (V3 stage), (<b>b</b>) ROI2 (V3 stage), (<b>c</b>) ROI3 (V6 stage), (<b>d</b>) ROI4 (V6 stage).</p>
Figure 4
<p>Spectral analysis of maize leaves: (<b>a</b>) Original image, (<b>b</b>) leaf G-band scatter plot, (<b>c</b>) leaf B-band scatter plot.</p>
Figure 5
<p>Algorithm result diagram: (<b>a</b>) Binarized images based on Otsu algorithm, (<b>b</b>) morphological processing for noise removal, (<b>c</b>) weed noise elimination schematic, (<b>d</b>) maize seedling counting based on connected component labeling.</p>
Figure 6
<p>The binary images of Otsu segmentation using different vegetation indices in ROI1 (V3 stage): (<b>a</b>) GBDI, (<b>b</b>) ExG, (<b>c</b>) ExR, (<b>d</b>) ExG − ExR, (<b>e</b>) NGRDI, (<b>f</b>) GLI, (<b>g</b>) Cg, (<b>h</b>) MCDI.</p>
Figure 7
<p>The binary images of Otsu segmentation using different vegetation indices in ROI2 (V3 stage): (<b>a</b>) GBDI, (<b>b</b>) ExG, (<b>c</b>) ExR, (<b>d</b>) ExG − ExR, (<b>e</b>) NGRDI, (<b>f</b>) GLI, (<b>g</b>) Cg, (<b>h</b>) MCDI.</p>
Figure 8
<p>The binary images of Otsu segmentation using different vegetation indices in ROI3 (V6 stage): (<b>a</b>) GBDI, (<b>b</b>) ExG, (<b>c</b>) ExR, (<b>d</b>) ExG − ExR, (<b>e</b>) NGRDI, (<b>f</b>) GLI, (<b>g</b>) Cg, (<b>h</b>) MCDI.</p>
Figure 9
<p>The binary images of Otsu segmentation using different vegetation indices in ROI4 (V6 stage): (<b>a</b>) GBDI, (<b>b</b>) ExG, (<b>c</b>) ExR, (<b>d</b>) ExG − ExR, (<b>e</b>) NGRDI, (<b>f</b>) GLI, (<b>g</b>) Cg, (<b>h</b>) MCDI.</p>
Figure 10
<p>Histogram of extraction accuracy of maize seedling number.</p>
Figure 11
<p>Accuracy statistics of sampling plots at different seedling stages: (<b>a</b>) V3 stage, (<b>b</b>) V6 stage.</p>
Figure 12
<p>Comparative analysis of real and detected seedlings: (<b>a</b>) V3 stage, (<b>b</b>) V6 stage.</p>
Figure 13
<p>Comparative analysis of overall accuracy of sampling plots at different seedling stages.</p>
Figure 14
<p>Error analysis: (<b>a</b>) Omission error, (<b>b</b>) commission error.</p>
Figure 15
<p>Spatial distribution of emergence rate and canopy coverage for maize: (<b>a</b>) emergence rate, (<b>b</b>) canopy coverage.</p>