Search Results (18)

Search Parameters:
Keywords = TIR image fusion

25 pages, 37756 KiB  
Article
Hyperspectral Anomaly Detection Using Spatial–Spectral-Based Union Dictionary and Improved Saliency Weight
by Sheng Lin, Min Zhang, Xi Cheng, Shaobo Zhao, Lei Shi and Hai Wang
Remote Sens. 2023, 15(14), 3609; https://doi.org/10.3390/rs15143609 - 19 Jul 2023
Cited by 5 | Viewed by 1379
Abstract
Hyperspectral anomaly detection (HAD), which is widely used in military and civilian fields, aims to detect pixels whose spectra deviate strongly from the background. Recently, collaborative representation using a union dictionary (CRUD) was shown to be effective for HAD. However, existing CRUD detectors generally use only spatial or spectral information to construct the union dictionary (UD), which can cause suboptimal performance and may be hard to apply in actual scenarios. Additionally, the anomalies are treated as salient relative to the background in a hyperspectral image (HSI). In this article, an HAD method using a spatial–spectral-based UD and an improved saliency weight (SSUD-ISW) is proposed. To construct a robust UD for each testing pixel, a spatial-based detector, a spectral-based detector and superpixel segmentation are jointly considered to yield a background set and an anomaly set, which provide pure and representative pixels to form the UD. Unlike the conventional approach, which uses dual windows to construct the background dictionary in a local region and employs the RX detector to construct the anomaly dictionary in a global scope, we developed a robust UD construction strategy in a nonglobal range by sifting the pixels closest to the testing pixel from the background set and anomaly set to form the UD. With the preconstructed UD, CRUD is performed, and the product of the anomaly dictionary and the corresponding representation coefficients is used to yield the response map. Moreover, an improved saliency weight is proposed to fully mine the saliency characteristic of the anomalies. To further improve the performance, the response map and saliency weight are combined with a nonlinear fusion strategy.
Extensive experiments performed on five datasets (i.e., Salinas, Texas Coast, Gainesville, San Diego and SpecTIR) demonstrate that the proposed SSUD-ISW detector achieves satisfactory AUCdf values (i.e., 0.9988, 0.9986, 0.9939, 0.9945 and 0.9997), compared with the comparative detectors, whose best AUCdf values are 0.9938, 0.9956, 0.9833, 0.9919 and 0.9991. Full article
(This article belongs to the Special Issue Computational Intelligence in Hyperspectral Remote Sensing)
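The representation step the abstract describes can be sketched in a few lines. This is a minimal, illustrative version of collaborative representation over a union dictionary (ridge-regularized least squares, with the norm of the anomaly-part reconstruction as the response), under assumed array shapes and a hypothetical `lam` parameter; it is not the authors' full SSUD-ISW pipeline, which also builds the per-pixel dictionary and applies the saliency weight.

```python
import numpy as np

def crud_response(x, D_B, D_A, lam=1e-2):
    """Simplified CRUD-style anomaly response for one testing pixel.

    x   : (bands,) spectrum of the testing pixel
    D_B : (bands, n_bg) background dictionary
    D_A : (bands, n_an) anomaly dictionary
    lam : ridge regularization weight (illustrative value)
    """
    D = np.hstack([D_B, D_A])                       # union dictionary
    # Ridge-regularized collaborative representation: (D^T D + lam I) a = D^T x
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    alpha_A = alpha[D_B.shape[1]:]                  # anomaly-part coefficients
    # Response = energy of the anomaly-dictionary reconstruction
    return np.linalg.norm(D_A @ alpha_A)
```

A pixel well explained by the background atoms yields a near-zero response, while a pixel that needs the anomaly atoms yields a large one; the full method then fuses this map with the improved saliency weight.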
Figure 1
<p>The schematic of the proposed SSUD-ISW detector using the Texas Coast dataset (see <a href="#sec5dot1dot1-remotesensing-15-03609" class="html-sec">Section 5.1.1</a>). Note that <math display="inline"><semantics><mrow><msub><mi mathvariant="bold-italic">x</mi><mi mathvariant="bold-italic">i</mi></msub></mrow></semantics></math> represents the <span class="html-italic">i</span>th testing pixel; <math display="inline"><semantics><mrow><msubsup><mi mathvariant="bold">D</mi><mi mathvariant="bold-italic">i</mi><mi mathvariant="bold-italic">B</mi></msubsup></mrow></semantics></math>, <math display="inline"><semantics><mrow><msubsup><mi mathvariant="bold">D</mi><mi mathvariant="bold-italic">i</mi><mi mathvariant="bold-italic">A</mi></msubsup></mrow></semantics></math> and <math display="inline"><semantics><mrow><msub><mi mathvariant="bold">D</mi><mi mathvariant="bold-italic">i</mi></msub></mrow></semantics></math> separately stand for the background dictionary, anomaly dictionary and union dictionary corresponding to the <span class="html-italic">i</span>th testing pixel; <math display="inline"><semantics><mrow><msubsup><mi mathvariant="bold-italic">α</mi><mi mathvariant="bold-italic">i</mi><mi mathvariant="bold-italic">B</mi></msubsup></mrow></semantics></math>, <math display="inline"><semantics><mrow><msubsup><mi mathvariant="bold-italic">α</mi><mi mathvariant="bold-italic">i</mi><mi mathvariant="bold-italic">A</mi></msubsup></mrow></semantics></math> and <math display="inline"><semantics><mrow><msub><mi mathvariant="bold-italic">α</mi><mi mathvariant="bold-italic">i</mi></msub></mrow></semantics></math> indicate the representation coefficients corresponding to the background dictionary, anomaly dictionary and union dictionary belonging to the <span class="html-italic">i</span>th testing pixel, respectively; and <math display="inline"><semantics><mrow><msub><mi mathvariant="bold-italic">e</mi><mi mathvariant="bold-italic">i</mi></msub></mrow></semantics></math> refers 
to the residual vector of the <span class="html-italic">i</span>th testing pixel.</p>
Figure 2
<p>The schematic of the construction of the background set and anomaly set using the Texas Coast dataset (see <a href="#sec5dot1dot1-remotesensing-15-03609" class="html-sec">Section 5.1.1</a>).</p>
Figure 3
<p>Visualization of the saliency weights under different situations for all datasets (see <a href="#sec5dot1dot1-remotesensing-15-03609" class="html-sec">Section 5.1.1</a>). Numbers (<b>I</b>–<b>III</b>) are the saliency weights obtained by Equation (5), the saliency weights acquired by Equation (23) and the saliency weights yielded by Equation (25), respectively. Letters (<b>a</b>–<b>e</b>) indicate the Salinas, Texas Coast, Gainesville, San Diego and SpecTIR datasets, respectively.</p>
Figure 4
<p>Visualization of the hyperspectral datasets: (<b>a</b>) color composites and (<b>b</b>) reference map. (<b>I</b>) Salinas, (<b>II</b>) Texas Coast, (<b>III</b>) Gainesville, (<b>IV</b>) San Diego and (<b>V</b>) SpecTIR.</p>
Figure 5
<p>With the parameters tuning over all datasets.</p>
Figure 6
<p>With the parameters <span class="html-italic">k<sub>B</sub></span> and <span class="html-italic">k<sub>A</sub></span> tuning over all datasets.</p>
Figure 7
<p>Histogram of AUC<span class="html-italic"><sub>df</sub></span> of detection results obtained by using different union dictionary construction strategies over all datasets.</p>
Figure 8
<p>Histogram of AUC<span class="html-italic"><sub>df</sub></span> of detection results obtained by using different saliency weights over all datasets.</p>
Figure 9
<p>Detection maps belonging to various detectors over the Salinas dataset.</p>
Figure 10
<p>ROC curves belonging to various detectors over the Salinas dataset. (<b>a</b>–<b>d</b>) The ROC curves introduced in <a href="#sec5dot1dot2-remotesensing-15-03609" class="html-sec">Section 5.1.2</a>.</p>
Figure 11
<p>Detection maps belonging to various detectors over the Texas Coast dataset.</p>
Figure 12
<p>ROC curves belonging to various detectors over the Texas Coast dataset. (<b>a</b>–<b>d</b>) The ROC curves introduced in <a href="#sec5dot1dot2-remotesensing-15-03609" class="html-sec">Section 5.1.2</a>.</p>
Figure 13
<p>Detection maps belonging to various detectors over the Gainesville dataset.</p>
Figure 14
<p>ROC curves belonging to various detectors over the Gainesville dataset. (<b>a</b>–<b>d</b>) The ROC curves introduced in <a href="#sec5dot1dot2-remotesensing-15-03609" class="html-sec">Section 5.1.2</a>.</p>
Figure 15
<p>Detection maps belonging to various detectors over the San Diego dataset.</p>
Figure 16
<p>ROC curves belonging to various detectors over the San Diego dataset. (<b>a</b>–<b>d</b>) The ROC curves introduced in <a href="#sec5dot1dot2-remotesensing-15-03609" class="html-sec">Section 5.1.2</a>.</p>
Figure 17
<p>Detection maps belonging to various detectors over the SpecTIR dataset.</p>
Figure 18
<p>ROC curves belonging to various detectors over the SpecTIR dataset. (<b>a</b>–<b>d</b>) The ROC curves introduced in <a href="#sec5dot1dot2-remotesensing-15-03609" class="html-sec">Section 5.1.2</a>.</p>
Figure 19
<p>Separability maps regarding the background and anomaly of various detectors over all datasets. The characters (i.e., “a”~“i”) on the horizontal axis are RX, CRD, CRDBPSW, LSMAD, LSDM-MoG, RGAE, GAED, NJCR and the proposed SSUD-ISW detector, respectively.</p>
24 pages, 36195 KiB  
Article
Multi-Temporal Remote Sensing Inversion of Evapotranspiration in the Lower Yangtze River Based on Landsat 8 Remote Sensing Data and Analysis of Driving Factors
by Enze Song, Xueying Zhu, Guangcheng Shao, Longjia Tian, Yuhao Zhou, Ao Jiang and Jia Lu
Remote Sens. 2023, 15(11), 2887; https://doi.org/10.3390/rs15112887 - 1 Jun 2023
Cited by 4 | Viewed by 1597
Abstract
Analysis of the spatial and temporal variation patterns of surface evapotranspiration is important for understanding global climate change, promoting scientific deployment of regional water resources, and improving crop yield and water productivity. Based on Landsat 8 OLI_TIRS data and remote sensing image data of the lower Yangtze River urban cluster for the same period of 2016–2021, combined with soil and meteorological data of the study area, this paper constructed a multiple linear regression (MLR) model and an extreme learning machine (ELM) inversion model with evapotranspiration (ET) as the target and, based on the model inversion, quantitatively and qualitatively analyzed the spatial and temporal variability in surface evapotranspiration in the study area over the past six years. The results show that both models based on feature factors and spectral indices obtained a good inversion accuracy, with the fusion of feature factors effectively improving the inversion ability of the models for ET. The best model for ET in 2016, 2017, and 2021 was MLR, with an R2 greater than 0.8; the best model for ET in 2018 and 2019 was ELM, with R2 values of 0.83 and 0.62, respectively. The inter-annual ET in the study area showed a "double-peak" dynamic variation, with peaks in 2018 and 2020; the intra-annual ET showed a single-peak cycle, with peaks in July–August. Seasonal differences were obvious, and spatially high-ET areas were mainly found in rural areas north of the Yangtze River and in the central and western parts of the study area, where agricultural land is concentrated. The net solar radiation, soil heat flux, soil temperature and humidity, and fractional vegetation cover all had significant positive effects on ET, with correlation coefficients ranging from 0.39 to 0.94. This study can provide methodological and scientific support for the quantitative and qualitative estimation of regional ET. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
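Of the two models compared, the extreme learning machine is the less standard one; a minimal sketch of the model class (a fixed random hidden layer with least-squares output weights) is given below. Hyperparameters such as `n_hidden` are illustrative, not the paper's configuration.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine for regression.

    The hidden-layer weights are drawn at random and frozen; only the
    output weights are fitted, via ordinary least squares. This is the
    generic ELM recipe, not the authors' exact setup.
    """

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection followed by a tanh nonlinearity
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights: least-squares solution of H @ beta = y
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Because only a linear system is solved, training is fast, which is one reason ELMs appear in remote sensing inversion studies alongside MLR.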
Figure 1
<p>Study area location map.</p>
Figure 2
<p>ELM model structure diagram.</p>
Figure 3
<p>Multicollinearity analysis.</p>
Figure 4
<p>Results of cluster analysis: (<b>a</b>) 2016 spectral data; (<b>b</b>) 2017 spectral data; (<b>c</b>) 2018 spectral data; (<b>d</b>) 2019 spectral data; (<b>e</b>) 2021 spectral data.</p>
Figure 5
<p>Model accuracy verification diagrams: (<b>a</b>) 2016 model accuracy; (<b>b</b>) 2017 model accuracy; (<b>c</b>) 2018 model accuracy; (<b>d</b>) 2019 model accuracy; (<b>e</b>) 2021 model accuracy.</p>
Figure 6
<p>ET inter-annual variation.</p>
Figure 7
<p>ET inter-monthly variation.</p>
Figure 8
<p>Spatial distribution of inter-annual ET.</p>
Figure 9
<p>Spatial distribution of ET seasons in 2017. Note: ET is the inverse value of the February evapotranspiration; ΔET is the increment in ET in each month compared to the ET in February.</p>
Figure 10
<p>Inter-monthly variation in ecological factors: (<b>a</b>) inter-month variation in net solar radiation; (<b>b</b>) inter-month variation in heat flux; (<b>c</b>) inter-month variation in soil moisture; (<b>d</b>) inter-month variation in soil temperature.</p>
Figure 11
<p>Environmental factor–ET correlation analysis: (<b>a</b>) correlation analysis of net solar radiation and ET; (<b>b</b>) correlation analysis of heat flux and ET; (<b>c</b>) correlation analysis of soil moisture and ET; (<b>d</b>) correlation analysis of soil temperature and ET.</p>
Figure 12
<p>Trends in monthly mean ET and NDVI values in the study area: (<b>a</b>) 2016; (<b>b</b>) 2017; (<b>c</b>) 2018; (<b>d</b>) 2019; (<b>e</b>) 2021.</p>
Figure 13
<p>FVC spatial distribution 2016–2021.</p>
Figure 14
<p>Spatial distribution of ET–FVC correlation coefficients.</p>
18 pages, 8569 KiB  
Article
A CNN-Based Layer-Adaptive GCPs Extraction Method for TIR Remote Sensing Images
by Lixing Zhao, Jingjie Jiao, Lan Yang, Wenhao Pan, Fanjun Zeng, Xiaoyan Li and Fansheng Chen
Remote Sens. 2023, 15(10), 2628; https://doi.org/10.3390/rs15102628 - 18 May 2023
Cited by 3 | Viewed by 1818
Abstract
Ground Control Points (GCPs) are of great significance for applications involving the registration and fusion of heterologous remote sensing images (RSIs). However, traditional methods based on intensity and local image features, which rely on low-level information rather than deep features, turn out to be unsuitable for heterologous RSIs because of the large nonlinear radiation difference (NRD), inconsistent resolutions, and geometric distortions. Additionally, the limitations of current heterologous datasets and existing deep-learning-based methods make it difficult to obtain enough high-precision GCPs from different kinds of heterologous RSIs, especially for thermal infrared (TIR) images, which present low spatial resolution and poor contrast. In this paper, to address the problems above, we propose a convolutional neural network-based (CNN-based) layer-adaptive GCPs extraction method for TIR RSIs. The constructed feature extraction network comprises basic and layer-adaptive modules. The former achieves the coarse extraction, and the latter obtains high-accuracy GCPs by adaptively updating the layers in the module to capture the fine communal homogenous features of the heterologous RSIs until the GCP precision meets the preset threshold. Experimental results evaluated on TIR images of SDGSAT-1 TIS and near infrared (NIR), short wave infrared (SWIR), and panchromatic (PAN) images of Landsat-8 OLI show that the matching root-mean-square error (RMSE) of TIS images with SWIR and NIR images could reach 0.8 pixels, and an even higher accuracy of 0.1 pixels could be reached between TIS and PAN images, outperforming traditional methods such as SIFT and RIFT and the CNN-based method D2-Net. Full article
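As a point of reference for what the CNN replaces, a classical intensity-based matcher can be sketched as zero-normalized cross-correlation (ZNCC) between a template chip and a search window. This baseline is exactly the kind of low-level method the abstract argues breaks down under large nonlinear radiation differences between heterologous images; all names below are illustrative.

```python
import numpy as np

def ncc_match(template, search):
    """Locate `template` inside `search` by zero-normalized cross-correlation.

    Returns the (row, col) of the best-scoring window and its ZNCC score
    (1.0 for a perfect match). Brute-force scan, for illustration only.
    """
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_rc = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * w).mean()          # correlation of normalized patches
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

ZNCC is invariant to affine intensity changes but not to the nonlinear radiometric differences between, say, TIR and PAN imagery, which motivates learned features.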
Graphical abstract
Figure 1
<p>Framework of the proposed layer-adaptive GCPs extraction method.</p>
Figure 2
<p>Architecture of the CNN feature extractor. AP and MP stand for average pooling and maximum pooling operations.</p>
Figure 3
<p>Test datasets. The size of all the SWIR, NIR, and TIR images is 500 × 500, and the size of the PAN images is 1000 × 1000.</p>
Figure 4
<p>The number of key points extracted from the feature map and the number of correct matching key points when different pooling operations and parameters are used. MP in the horizontal coordinates represents maximum pooling, AP represents average pooling, and the number in parentheses represents the pooling window size. The bar chart indicates the number of key points extracted from the feature map, and the line chart indicates the number of correct matches obtained.</p>
Figure 5
<p>The results of four methods. The red lines and the yellow lines indicate the correct matches and the outliers, respectively.</p>
20 pages, 10826 KiB  
Article
Analysis of Spectral Separability for Detecting Burned Areas Using Landsat-8 OLI/TIRS Images under Different Biomes in Brazil and Portugal
by Admilson da Penha Pacheco, Juarez Antonio da Silva Junior, Antonio Miguel Ruiz-Armenteros, Renato Filipe Faria Henriques and Ivaneide de Oliveira Santos
Forests 2023, 14(4), 663; https://doi.org/10.3390/f14040663 - 23 Mar 2023
Cited by 8 | Viewed by 2182
Abstract
Fire is one of the natural agents with the greatest impact on the terrestrial ecosystem and plays an important ecological role in a large part of the terrestrial surface. Remote sensing is an important technique applied in mapping and monitoring changes in forest landscapes affected by fires. This study presents a spectral separability analysis for the detection of burned areas using Landsat-8 OLI/TIRS images in the context of fires that occurred in different biomes of Brazil (dry ecosystem) and Portugal (temperate forest). The research is based on a fusion of spectral indices and automatic classification algorithms scientifically proven to be effective with as little human interaction as possible. The separability index (M) and the Reed–Xiaoli automatic anomaly detection classifier (RXD) allowed the evaluation of the spectral separability and the thematic accuracy of the burned areas for the different spectral indices tested (Burn Area Index (BAI), Normalized Burn Ratio (NBR), Mid-Infrared Burn Index (MIRBI), Normalized Burn Ratio 2 (NBR2), Normalized Burned Index (NBI), and Normalized Burn Ratio Thermal (NBRT)). The analysis parameters were based on spatial dispersion with validation data, commission error (CE), omission error (OE), and the Sørensen–Dice coefficient (DC). The results indicated that the indices based exclusively on the SWIR1 and SWIR2 bands showed a high degree of separability and were more suitable for detecting burned areas, although the characteristics of the soil affected the performance of the indices. The classification method based on bitemporal anomalous changes using the RXD detector proved effective at delineating the burned area as a temporal change, performing unsupervised detection without relying on ground truth.
On the other hand, the main limitations of RXD were observed for non-abrupt changes, which are very common in fires with a low spectral signal, especially in the context of using Landsat-8 images with a 16-day revisit period. The results obtained in this work provide critical information for fire-mapping algorithms and for accurate post-fire spatial estimation in dry ecosystems and temperate forests. The study presents a new comparative approach to classifying burned areas in dry ecosystems and temperate forests with the least possible human interference, thus helping investigations when little data on fires are available, in addition to favoring a reduction in fieldwork and in gross errors in the classification of burned areas. Full article
(This article belongs to the Special Issue Forest Fires Prediction and Detection)
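The evaluation quantities the abstract names are compact enough to state directly. The sketch below gives common textbook definitions of the separability index M, the NBR spectral index, and the Sørensen–Dice coefficient; the authors' exact formulations may differ in detail.

```python
import numpy as np

def separability_m(burned, unburned):
    """Separability index M = |mu_b - mu_u| / (sigma_b + sigma_u).

    M > 1 is commonly read as good separability between the burned and
    unburned class samples of a spectral index.
    """
    return abs(np.mean(burned) - np.mean(unburned)) / (
        np.std(burned) + np.std(unburned))

def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 reflectance."""
    return (nir - swir2) / (nir + swir2 + 1e-12)

def dice(pred, ref):
    """Sørensen–Dice coefficient (DC) between binary burned-area masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())
```

The other indices tested (BAI, MIRBI, NBR2, NBI, NBRT) follow the same pattern of band ratios or differences over the red, SWIR, and thermal bands.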
Figure 1
<p>Location of study areas in Brazil and Portugal as well as pre- and post-fire images.</p>
Figure 2
<p>Flowchart of the burned area detection process.</p>
Figure 3
<p>Spatial distribution of true positive (TP) pixels (where true burned areas were detected) for the study case of Brazil. Classification method: unsupervised RXD Anomaly. (<b>a</b>) BAI. (<b>b</b>) MIRBI. (<b>c</b>) NBI. (<b>d</b>) NBR. (<b>e</b>) NBR2. (<b>f</b>) NBRT.</p>
Figure 4
<p>Spatial distribution of true positive (TP) pixels for the study area of Portugal. Classification method: unsupervised RXD Anomaly. (<b>a</b>) BAI. (<b>b</b>) MIRBI. (<b>c</b>) NBI. (<b>d</b>) NBR. (<b>e</b>) NBR2. (<b>f</b>) NBRT.</p>
Figure 5
<p>Burned area in relation to the reference maps of Brazil (<b>a</b>) and Portugal (<b>b</b>).</p>
12 pages, 2960 KiB  
Article
Assessment of Small-Extent Forest Fires in Semi-Arid Environment in Jordan Using Sentinel-2 and Landsat Sensors Data
by Bassam Qarallah, Yahia A. Othman, Malik Al-Ajlouni, Hadeel A. Alheyari and Bara’ah A. Qoqazeh
Forests 2023, 14(1), 41; https://doi.org/10.3390/f14010041 - 26 Dec 2022
Cited by 7 | Viewed by 2554
Abstract
The objective of this study was to evaluate the separability potential of Sentinel-2A (MultiSpectral Instrument, MSI) and Landsat (Operational Land Imager, OLI and Thermal Infrared Sensor, TIRS) derived indices for detecting small-extent (<25 ha) forest fire areas and severity degrees. Three remote sensing indices [differenced Normalized Burn Ratio (dNBR), differenced Normalized Difference Vegetation Index (dNDVI), and differenced surface temperature (dTST)] were used at three forest fire sites located in Northern Jordan: Ajloun (total burned area 23 ha), Dibbeen (burned area 10.5 ha), and Sakeb (burned area 15 ha). Compared to ground reference data, Sentinel-2 MSI was able to delimit the fire perimeter more precisely than Landsat-8. The accuracy of detecting burned area (area of coincidence) in Sentinel-2 was 7%–26% higher than that of Landsat-8 OLI across sites. In addition, Sentinel-2 reduced the omission area by 28%–43% and the commission area by 6%–38% compared to Landsat-8 sensors. The higher accuracy of Sentinel-2 was attributed to its higher spatial resolution and lower mixed-pixel problem across the perimeter of the burned area (mixed pixels within the fire perimeter: 8.5%–13.5% for Sentinel-2 vs. 31%–52% for Landsat OLI). In addition, dNBR had higher accuracy (higher coincidence values and less omission and commission) than dNDVI and dTST. In terms of fire severity degrees, dNBR (the best fire index candidate) derived from both satellite sensors was only capable of detecting the severe spots ("severely burned") with producer accuracy >70%. In fact, the dNBR-Sentinel-2/Landsat-8 overall accuracy and Kappa coefficient for classifying fire severity degree were less than 70% across the studied sites, except for Sentinel-dNBR in Dibbeen (72.5%).
In conclusion, Sentinel-dNBR and Landsat show promise for delimiting forest fire perimeters in small-scale (<25 ha) areas, but further remote sensing techniques (e.g., Landsat–Sentinel data fusion) are required to improve the fire severity-separability potential. Full article
(This article belongs to the Special Issue Mapping Forest Vegetation via Remote Sensing Tools)
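For context on the severity classes discussed, dNBR values are often binned with the widely cited USGS/Key–Benson breakpoints. These thresholds are illustrative and normally need site-specific calibration, which is consistent with the modest severity-classification accuracies reported above.

```python
# Commonly cited dNBR severity breakpoints (USGS / Key & Benson style).
# Treat the exact cut points as illustrative, not as the paper's values.
SEVERITY_BINS = [
    (-0.10, "unburned"),
    (0.10, "low"),
    (0.27, "moderate-low"),
    (0.44, "moderate-high"),
    (0.66, "high"),
]

def dnbr_severity(pre_nbr, post_nbr):
    """Classify burn severity from pre- and post-fire NBR values.

    dNBR = pre - post; the highest breakpoint not exceeding dNBR wins.
    """
    d = pre_nbr - post_nbr
    label = "unburned"
    for thresh, name in SEVERITY_BINS:
        if d >= thresh:
            label = name
    return label
```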
Figure 1
<p>Location of the Ajloun, Dibbeen and Sakeb forest fire sites, Northern Jordan.</p>
Figure 2
<p>Landsat (OLI/ETM+) and Sentinel-2 (MSI) differenced Normalized Burn Ratio (dNBR), Normalized Different Vegetation Index (dNDVI) and thermal surface temperature (dTST) for Ajloun, Dibbeen and Sakeb wildfires, Northern Jordan.</p>
16 pages, 6371 KiB  
Article
Estimation of Nitrogen Content in Winter Wheat Based on Multi-Source Data Fusion and Machine Learning
by Fan Ding, Changchun Li, Weiguang Zhai, Shuaipeng Fei, Qian Cheng and Zhen Chen
Agriculture 2022, 12(11), 1752; https://doi.org/10.3390/agriculture12111752 - 23 Oct 2022
Cited by 10 | Viewed by 2744
Abstract
Nitrogen (N) is an important factor limiting crop productivity, and accurate estimation of the N content in winter wheat can effectively monitor the crop growth status. The objectives of this study were to evaluate the ability of an unmanned aerial vehicle (UAV) platform with multiple sensors to estimate the N content of winter wheat using machine learning algorithms; to collect multispectral (MS), red-green-blue (RGB), and thermal infrared (TIR) images to construct a multi-source data fusion dataset; and to predict the N content in winter wheat using random forest regression (RFR), support vector machine regression (SVR), and partial least squares regression (PLSR). The results showed that the mean absolute error (MAE) and relative root-mean-square error (rRMSE) of all models showed an overall decreasing trend with an increasing number of input features from different data sources. The accuracy varied among the three algorithms used, with RFR achieving the highest prediction accuracy, with an MAE of 1.616 mg/g and rRMSE of 12.333%. For models built with single-sensor data, MS images achieved a higher accuracy than RGB and TIR images. This study showed that the multi-source data fusion technique can enhance the prediction of N content in winter wheat and provide assistance for decision-making in practical production. Full article
(This article belongs to the Special Issue Remote Sensing Technologies in Agricultural Crop and Soil Monitoring)
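The feature-level fusion and the two error metrics reported (MAE, rRMSE) can be sketched as follows. Shapes and names are illustrative, and the regression models themselves (RFR, SVR, PLSR) are standard library components omitted here.

```python
import numpy as np

def fuse_features(ms, rgb, tir):
    """Feature-level multi-source fusion: concatenate per-plot features
    from the three sensors. Each input is (n_samples, n_features_i)."""
    return np.hstack([ms, rgb, tir])

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def rrmse(y_true, y_pred):
    """Relative RMSE in percent, normalized by the mean observation."""
    return 100 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)
```

The abstract's trend (errors falling as features from more sensors are added) corresponds to fitting the same regressor on progressively wider fused matrices.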
Figure 1
<p>Test field located in Henan Province, China. (<b>a</b>) Overview of China, (<b>b</b>) overview of Henan Province, (<b>c</b>) trial field layout, and (<b>d</b>) field photos.</p>
Figure 2
<p>Changes in meteorological data for the main growing season of winter wheat in 2022. (<b>a</b>) Daily maximum and minimum temperatures, (<b>b</b>) daily average carbon dioxide concentration, and (<b>c</b>) average humidity in the test plots.</p>
Figure 3
<p>Unmanned aerial vehicles (UAVs) and sensors. (<b>a</b>) DJI M210 UAV, (<b>b</b>) DJI Phantom 4 Pro UAV, (<b>c</b>) multispectral (MS) camera, (<b>d</b>) thermal infrared (TIR) camera, and (<b>e</b>) red-green-blue (RGB) camera.</p>
Figure 4
<p>A workflow diagram of data processing, feature extraction, and modeling.</p>
Figure 5
<p>Vegetation and bare soil segmentation. (<b>a</b>) The local RGB image and (<b>b</b>) the corresponding local vegetation-soil segmentation.</p>
Figure 6
<p>Canopy shadow segmentation. (<b>a</b>) RGB local map and green band histogram of random plots, (<b>b</b>) image and its histogram obtained by increasing saturation and changing brightness in (<b>a</b>), (<b>c</b>) RGB local image, and (<b>d</b>) canopy shadow segmentation of (<b>c</b>).</p>
Figure 7
<p>Correlation analysis of canopy shade coverage (CSC) and N content of winter wheat.</p>
Figure 8
<p>Correlation between CSC and N content at different dates.</p>
Figure 9
<p>(<b>a</b>) Variation of mean absolute error (MAE) and (<b>b</b>) variation of relative root mean square error (rRMSE) for prediction results of different models with different input features and quantities.</p>
Figure 10
<p>Scatter plot of measured and predicted values of N content for (<b>a</b>) random forest regression (RFR), (<b>b</b>) support vector machine regression (SVR) and (<b>c</b>) partial least squares regression (PLSR) models.</p>
21 pages, 4801 KiB  
Article
Thermal Infrared Tracking Method Based on Efficient Global Information Perception
by Long Zhao, Xiaoye Liu, Honge Ren and Lingjixuan Xue
Sensors 2022, 22(19), 7408; https://doi.org/10.3390/s22197408 - 29 Sep 2022
Cited by 2 | Viewed by 1534
Abstract
To address the limited ability of current Thermal InfraRed (TIR) tracking methods to resist occlusion and interference from similar targets, we propose a TIR tracking method based on efficient global information perception. To efficiently obtain the global semantic information of images, we use the Transformer structure for feature extraction and fusion. In the feature extraction process, the Focal Transformer structure is used to improve the efficiency of long-range information modeling, which is highly similar to the human attention mechanism. The feature fusion process adds relative position encoding to the standard Transformer structure, which allows the model to continuously consider the influence of positional relationships during the learning process and to generalize to capture different positional information for different input sequences. Thus, the Transformer structure models the semantic information contained in images more efficiently. To further improve the tracking accuracy and robustness, a heterogeneous bi-prediction head is utilized in the object prediction process: a fully connected sub-network is responsible for the classification prediction of foreground or background, and a convolutional sub-network is responsible for the regression prediction of the object bounding box. To alleviate the contradiction between the vast training-data demand of the Transformer model and the insufficient scale of the TIR tracking dataset, the LaSOT-TIR dataset is generated with a generative adversarial network for network training. Our method achieves the best performance compared with other state-of-the-art trackers on the VOT2015-TIR, VOT2017-TIR, PTB-TIR and LSOTB-TIR datasets, and performs especially well when dealing with severe occlusion or interference from similar objects. Full article
(This article belongs to the Special Issue Sensing and Processing for Infrared Vision: Methods and Applications)
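The relative-position idea the abstract describes, adding a bias indexed by query-key offset to the attention scores, can be sketched as follows. This is the generic mechanism in plain NumPy, not the paper's exact Focal Transformer implementation; in practice `rel_bias` would be a learned table indexed by offsets.

```python
import numpy as np

def attention_with_rel_pos(Q, K, V, rel_bias):
    """Scaled dot-product attention with an additive relative-position bias.

    Q, K     : (n, d) query/key matrices
    V        : (n, dv) value matrix
    rel_bias : (n, n) bias where rel_bias[i, j] encodes the positional
               relation between query i and key j (learned in practice)
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + rel_bias
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ V
```

Because the bias depends only on relative offsets, the same table applies to sequences of different lengths, which is the generalization property the abstract mentions.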
Figure 1
<p>The performance of current methods in dealing with the challenge of interference from similar objects.</p>
Figure 2
<p>The overall framework of the TIR target tracker.</p>
Figure 3
<p>The feature extraction backbone network.</p>
Full article ">Figure 4
<p>The Focal Transformer block. <span class="html-italic">FSA</span> is the focal−based self−attention module. <span class="html-italic">LN</span> is the layer normalization layer. MLP is the multi-layer perception network.</p>
Full article ">Figure 5
<p>The focus self-attention mechanism in the window level.</p>
Full article ">Figure 6
<p>The structure of feature fusion network.</p>
Full article ">Figure 7
<p>The focus self-attention in the window level. fc is the fully connected layer. conv is the convolutional layer.</p>
Full article ">Figure 8
<p>Different sequences in the same image. (<b>a</b>) is the sequence of images in normal order, and (<b>b</b>) is the sequence of images out of order.</p>
Full article ">Figure 9
<p>The framework of pix2pix network.</p>
Full article ">Figure 10
<p>The structure of U-Net network.</p>
Full article ">Figure 11
<p>The visualization of tracking results in the sequence trees.</p>
Full article ">Figure 12
<p>The visualization of tracking results in the sequence saturated.</p>
Full article ">Figure 13
<p>Comparison results with existing methods in the PTB-TIR dataset.</p>
Full article ">Figure 14
<p>Comparison results with the existing methods in the LSOTB_TIR dataset.</p>
Full article ">Figure 15
<p>The visualization of tracking results in the sequence person_S_014.</p>
Full article ">Figure 16
<p>The visualization of tracking results in the sequence deer_H_001.</p>
Full article ">Figure 17
<p>The visualization of tracking results in the sequence leopard_H_001.</p>
Full article ">
13 pages, 2001 KiB  
Article
Improvement of Wheat Growth Information by Fusing UAV Visible and Thermal Infrared Images
by Jun Yu, Chengquan Zhou and Jinling Zhao
Agronomy 2022, 12(9), 2087; https://doi.org/10.3390/agronomy12092087 - 1 Sep 2022
Cited by 2 | Viewed by 1339
Abstract
To improve and enrich the wheat growth information, visible and thermal infrared (TIR) remote sensing images were simultaneously acquired by an unmanned aerial vehicle (UAV). A novel algorithm was proposed for fusing the visible and TIR images combining the intensity hue saturation (IHS) [...] Read more.
To improve and enrich the wheat growth information, visible and thermal infrared (TIR) remote sensing images were simultaneously acquired by an unmanned aerial vehicle (UAV). A novel algorithm was proposed for fusing the visible and TIR images combining the intensity hue saturation (IHS) transform and regional variance matching (RVM). After registering the two images, the IHS transform was first conducted to derive the Intensities of the two images. The wavelet transform was then applied to the Intensities to obtain the coefficients of the low- and high-frequency sub-bands. The fusion rules of the fused image were developed based on the regional correlation of the wavelet decomposition coefficients. More specifically, the coefficients of the low-frequency sub-bands were calculated by averaging the coefficients of the two images. Regional variance was used to generate the coefficients of the high-frequency sub-bands using the weighted template of a 3 × 3 pixel window. The inverse wavelet transform was used to create the new Intensity for the fused image using the low- and high-frequency coefficients. Finally, the inverse IHS transform consisting of the new Intensity, the Hue of the visible image, and the Saturation of the TIR image was adopted to change the IHS space to the red–green–blue (RGB) color space. The fusion effects were validated by the visible and TIR images of winter wheat at the jointing stage and the middle and late grain-filling stage. Meanwhile, IHS and RV were also comparatively evaluated for validating our proposed method. The proposed algorithm can fully consider the correlation of wavelet coefficients in local regions. It overcomes the shortcomings (e.g., block phenomenon, color distortion) of traditional image fusion methods to obtain smooth, detailed and high-resolution images. Full article
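The fusion rules described above (average the low-frequency sub-bands; weight the high-frequency sub-bands by variance in a 3 × 3 window) can be sketched in NumPy. The wavelet decomposition itself is omitted, and the function names are illustrative, not the paper's:

```python
import numpy as np

def box_mean(x, size=3):
    """Mean over a size x size neighborhood via edge padding and shifted sums."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros(x.shape, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / size ** 2

def regional_variance(x, size=3):
    """Local variance in a size x size window (clipped to avoid negative noise)."""
    return np.maximum(box_mean(x * x, size) - box_mean(x, size) ** 2, 0.0)

def fuse_subbands(low_a, low_b, high_a, high_b, size=3):
    """Low-frequency rule: plain average. High-frequency rule: blend weighted
    by regional variance, favoring the source with more local detail."""
    low = 0.5 * (low_a + low_b)
    va, vb = regional_variance(high_a, size), regional_variance(high_b, size)
    w = va / (va + vb + 1e-12)
    return low, w * high_a + (1 - w) * high_b

# The high-frequency rule keeps the detailed source: a checkerboard beats zeros.
high_a = (np.indices((8, 8)).sum(0) % 2).astype(float)
high_b = np.zeros((8, 8))
low, high = fuse_subbands(high_b + 1.0, high_b + 3.0, high_a, high_b)
print(np.allclose(high, high_a))  # True
```

In the full method these rules would be applied per wavelet sub-band before the inverse transform reconstructs the new Intensity.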
Figure 1
<p>The UAV platform for simultaneously mounting the HD digital and TIR cameras.</p>
Figure 2
<p>The general workflow of the proposed method to fuse the visible and TIR images. IHS is the intensity hue saturation; WT is the wavelet transform; and <span class="html-italic">H</span>, <span class="html-italic">S</span>, <span class="html-italic">I</span> refer to the hue, saturation, and intensity, respectively.</p>
Figure 3
<p>Comparison of fused images using our proposed method, regional variance and IHS transform for Stage I. (<b>a</b>) Original visible image; (<b>b</b>) Original TIR image; (<b>c</b>) IHS-RVM; (<b>d</b>) Regional variance; and (<b>e</b>) IHS transform.</p>
Figure 4
<p>Comparison of fused images using our proposed method, regional variance and IHS transform for Stage II. (<b>a</b>) Original visible image; (<b>b</b>) Original TIR image; (<b>c</b>) IHS-RVM; (<b>d</b>) Regional variance; and (<b>e</b>) IHS transform.</p>
26 pages, 18440 KiB  
Article
A Fusion of Feature-Oriented Principal Components of Multispectral Data to Map Granite Exposures of Pakistan
by Shahab Ud Din, Khan Muhammad, Muhammad Fawad Akbar Khan, Shahid Bashir, Muhammad Sajid and Asif Khan
Appl. Sci. 2021, 11(23), 11486; https://doi.org/10.3390/app112311486 - 3 Dec 2021
Cited by 2 | Viewed by 2320
Abstract
Despite low spatial resolutions, thermal infrared bands (TIRs) are generally more suitable for mineral mapping due to fundamental tones and high penetration in vegetated areas compared to shortwave infrared (SWIR) bands. However, the weak overtone combinations of SWIR bands for minerals can be [...] Read more.
Despite low spatial resolutions, thermal infrared bands (TIRs) are generally more suitable for mineral mapping due to fundamental tones and high penetration in vegetated areas compared to shortwave infrared (SWIR) bands. However, the weak overtone combinations of SWIR bands for minerals can be compensated by fusing SWIR-bearing data (Sentinel-2 and Landsat-8) with other multispectral data containing fundamental tones from TIR bands. In this paper, marble in a granitic complex in Mardan District (Khyber Pakhtunkhwa) in Pakistan is discriminated by fusing feature-oriented principal component selection (FPCS) components obtained from the ASTER, Landsat-8 Operational Land Imager (OLI), Thermal Infrared Sensor (TIRS) and Sentinel-2 MSI data. Cloud computing from Google Earth Engine (GEE) was used to apply FPCS before and after the decorrelation stretching of Landsat-8, ASTER, and Sentinel-2 MSI data containing five (5) bands in the Landsat-8 OLI and TIRS and six (6) bands each in the ASTER and Sentinel-2 MSI datasets, resulting in 34 components (i.e., 2 × 17 components). A weighted linear combination of the three selected components was used to map granite and marble. The samples collected during field visits and petrographic analysis confirmed the remote sensing results by revealing the region’s precise contact and extent of marble and granite rock types. The experimental results reflected the theoretical advantages of the proposed approach compared with the conventional stacking of band data for PCA-based fusion. The proposed methodology was also applied to delineate granite deposits in Karoonjhar Mountains, Nagarparker (Sindh province) and the Kotah Dome, Malakand (Khyber Pakhtunkhwa Province) in Pakistan. The paper presents a cost-effective methodology by the fusion of FPCS components for granite/marble mapping during mineral resource estimation.
The importance of SWIR-bearing components in fusion represents minor minerals present in granite that could be used to model the engineering properties of the rock mass. Full article
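A minimal sketch of the PCA step behind feature-oriented component selection: compute principal-component images from a band stack, then linearly combine hand-picked components with weights. The component indices and the 1.5/0.375/1.25 weights below simply mirror the values reported for Figure 10 and are otherwise arbitrary; this is not the authors' GEE pipeline:

```python
import numpy as np

def principal_components(bands):
    """PCA of a (rows, cols, nbands) stack; returns component images
    ordered by decreasing explained variance."""
    h, w, nb = bands.shape
    x = bands.reshape(-1, nb).astype(float)
    x -= x.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # descending variance
    return (x @ eigvecs[:, order]).reshape(h, w, nb)

def weighted_composite(components, weights):
    """Weighted linear combination of selected component images."""
    return sum(wt * c for wt, c in zip(weights, components))

rng = np.random.default_rng(1)
cube = rng.random((16, 16, 6))                 # e.g. six spectral bands
pcs = principal_components(cube)
fcc = weighted_composite([pcs[..., 2], pcs[..., 3], pcs[..., 4]],
                         [1.5, 0.375, 1.25])
print(fcc.shape)  # (16, 16)
```

In the paper the three inputs come from different sensors (stretched Sentinel-2, raw Landsat-8, stretched Landsat-8), whereas this toy draws them from a single cube for brevity.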
(This article belongs to the Section Earth Sciences)
Figure 1
<p>Location map of the study area in Pakistan with the region of interest (ROI).</p>
Figure 2
<p>Locations of granite leases (ML-1 and ML-2) at Shewa Shahbazghari, granite leases (ML-3 and ML-4), and marble mining leases (MMLs) at Ambela Complex. The regions of dense vegetation (DV) and light vegetation (LV) are overlaid on the Sentinel-2 Multispectral Instrument (MSI) mosaicked image. Seven points P1–P7 show different validation points during the field visit.</p>
Figure 3
<p>Schematic chart of the applied methodology.</p>
Figure 4
<p>(<b>a</b>) Scatter plots showing the effects of the principal component analysis (PCA) and decorrelation stretching of Landsat-8 OLI bands pixel data (raw Landsat-8 B5 (NIR) vs. B6—SWIR1 (blue), and B7—SWIR2 (red)); (<b>b</b>) scatter plots of B5 (NIR) vs. B6—SWIR1 (blue), and B7—SWIR2 (red) after the application of decorrelation stretching on raw Landsat-8 data; (<b>c</b>) scatter plots of PC1 vs. PC2 (blue) and PC3 (red) after the application of PCA on the raw Landsat-8 datasets; and (<b>d</b>) scatter plots of PC1 vs. PC2 (blue) and PC3 (red) after the application of PCA on the stretched Landsat-8 datasets.</p>
Figure 5
<p>Spectral profiles of granite, marble, and vegetation in Shewa Shahbazgarhi Mardan.</p>
Figure 6
<p>USGS spectral library plots of different granite minerals found in the case studies.</p>
Figure 7
<p>(<b>a</b>) Landsat-8 PC4 of the raw dataset showing granite as darker and medium-toned and marble shown in dark; (<b>b</b>) PC3 of the raw dataset showing marble and vegetation in brighter pixels and granite in dark; (<b>c</b>) PC3 of the stretched Landsat-8 data showing granite in dark to medium-toned.</p>
Figure 8
<p>(<b>a</b>) PC1 of the raw ASTER data (the negative loadings of all bands generally represented albedo and topographic effects); (<b>b</b>) PC6 of the raw ASTER data showing granite in darker and medium-toned pixels with very little information about marble due to the lower resolution of the ASTER images; (<b>c</b>) PC5 of the stretched ASTER data showing granite in brighter to medium-toned pixels and marble in darker pixels.</p>
Figure 9
<p>(<b>a</b>) PC3 of the stretched Sentinel-2 MSI data showing brighter and medium-toned granite and darker vegetation; (<b>b</b>) PC3 of the raw Sentinel-2 MSI data showing granite as medium-toned and vegetation in brighter pixels; (<b>c</b>) PC4 of the raw Sentinel-2 MSI data showing granite as brighter and medium-toned and vegetation in darker pixels. Marble is shown in brighter pixels in all these components.</p>
Figure 10
<p>(<b>a</b>) Weighted false-color composite (FCC) image from selected FPCS components (stretched Sentinel-2 MSI data (PC3), raw Landsat (PC4), and stretched Landsat data (PC3) weights were 1.5, 0.375, and 1.25, respectively). Granite occurrences were observed at P1 (72°20′31″ E, 34°20′26″ N). DV was observed at P2 (72°20′50″ E, 34°20′21″ N), and low vegetation (LV) was observed at P3 (72°21′7″ E, 34°20′12″ N). While granite was easily discernible in non-vegetated areas at P4 (72°24′29.5″ E, 34°17′47″ N) and P5 (72°24′17″ E, 34°17′51″ N), and marble outcrops were observed at P6 (72°28′7″ E, 34°17′36″ N) and P7 (72°28′10″, 34°17′37″ N); (<b>b</b>) FCC image of PC4, PC5, and PC6 of PCA applied to the stacked raw bands of all the three datasets; (<b>c</b>) FCC image of PC1, PC2, and PC3 of PCA applied to the stacked stretched bands of all the three datasets.</p>
Figure 11
<p>(<b>a</b>) Binarized normalized difference vegetation index (NDVI) of &gt;0.50% showing pixels with higher vegetation cover in black color; (<b>b</b>) low vegetation region (72°21′48″ E, 34°19′44″ N); (<b>c</b>) medium vegetation region (72°21′8″ E, 34°20′13″ N).</p>
Figure 12
<p>Zoomed-in areas of the study region showing MMLs and granite leases (ML-1–ML4) amongst low vegetated (LV) regions.</p>
Figure 13
<p>Petrographic and textural examination and characterization results of the sample rocks. The coordinates of samples: (<b>a</b>) P1: 72°20′31″ E, 34°20′26″ N; (<b>b</b>) P5: 72°24′17″ E, 34°17′51″ N; (<b>c</b>) P7: 72°28′10″ E, 34°17′37″ N; and (<b>d</b>) P6: 72°28′7″ E, 34°17′36″ N.</p>
Figure 14
<p>Granite map in the Kotah Dome, Malakand with one component of the stretched Sentinel-2, ASTER, and Landsat-8 data each.</p>
Figure 15
<p>Granite mapping in Karoonjhar Mountains in Nagarparkar with one component of the raw and stretched Sentinel-2 datasets and the stretched ASTER dataset each.</p>
28 pages, 48431 KiB  
Article
Fourier Domain Anomaly Detection and Spectral Fusion for Stripe Noise Removal of TIR Imagery
by Qingjie Zeng, Hanlin Qin, Xiang Yan and Tingwu Yang
Remote Sens. 2020, 12(22), 3714; https://doi.org/10.3390/rs12223714 - 12 Nov 2020
Cited by 15 | Viewed by 3167
Abstract
Stripe noise is a common and unwelcome noise pattern in various thermal infrared (TIR) image data including conventional TIR images and remote sensing TIR spectral images. Most existing stripe noise removal (destriping) methods struggle to maintain a good and robust efficacy [...] Read more.
Stripe noise is a common and unwelcome noise pattern in various thermal infrared (TIR) image data including conventional TIR images and remote sensing TIR spectral images. Most existing stripe noise removal (destriping) methods struggle to maintain a good and robust efficacy in dealing with real-life complex noise cases. In this paper, based on the intrinsic spectral properties of TIR images and stripe noise, we propose a novel two-stage transform domain destriping method called Fourier domain anomaly detection and spectral fusion (ADSF). Considering the principal frequencies polluted by stripe noise as outliers in the statistical spectrum of TIR images, our basic idea is first to detect the potential anomalies and then correct them effectively in the Fourier domain to reconstruct a desired destriping result. More specifically, anomaly detection for stripe frequencies is achieved through a regional comparison between the original spectrum and the expected spectrum that statistically follows a generalized Laplacian regression model, and then an anomaly weight map is generated accordingly. In the correction stage, we propose a guidance-image-based spectrum fusion strategy, which integrates the original spectrum and the spectrum of a guidance image via the anomaly weight map. The final reconstruction result not only has no stripe noise but also maintains image structures and details well. Extensive real experiments are performed on conventional TIR images and remote sensing spectral images, respectively. The qualitative and quantitative assessment results demonstrate the superior effectiveness and strong robustness of the proposed method. Full article
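The two-stage idea (detect anomalous stripe frequencies, then fuse with a guidance spectrum via a weight map) can be sketched as follows. This toy version replaces the generalized Laplacian model with simple per-row spectrum statistics and, for illustration only, uses a stripe-free image as the guidance image; none of this is the authors' exact formulation:

```python
import numpy as np

def destripe_adsf_sketch(img, guide, z_thresh=3.0):
    """Stage 1: flag spectral magnitudes that are outliers relative to their
    row of the 2-D spectrum (column stripes concentrate in a single row).
    Stage 2: spectrum fusion -- take the guidance image's spectrum at the
    flagged anomalies and keep the original spectrum elsewhere."""
    F, G = np.fft.fft2(img), np.fft.fft2(guide)
    mag = np.log1p(np.abs(F))
    mu = mag.mean(axis=1, keepdims=True)
    sd = mag.std(axis=1, keepdims=True) + 1e-12
    w = ((mag - mu) / sd > z_thresh).astype(float)   # anomaly weight map
    return np.real(np.fft.ifft2((1 - w) * F + w * G))

n = 64
y, x = np.mgrid[0:n, 0:n]
clean = (x + y) / (2 * n)                            # smooth scene
striped = clean + np.sin(2 * np.pi * x * 8 / n)      # periodic column stripes
out = destripe_adsf_sketch(striped, clean)
print(np.abs(out - clean).mean() < np.abs(striped - clean).mean())  # True
```

In the real method the guidance image is the structural component from iterative guided filtering of the striped input, so no stripe-free reference is needed.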
(This article belongs to the Special Issue Correction of Remotely Sensed Imagery)
Graphical abstract
Figure 1
<p>Spectral statistics of TIR images. (<b>a</b>) Average power spectrum of the TIR image dataset from [<a href="#B31-remotesensing-12-03714" class="html-bibr">31</a>]; (<b>b</b>) Fitted result via the generalized Laplacian distribution.</p>
Figure 2
<p>Spectral distribution of stripe noise. (<b>a</b>) <span class="html-italic">Pentagon</span> image and power spectrum; (<b>b</b>) <span class="html-italic">Pentagon</span> image with periodic stripes and power spectrum; (<b>c</b>) <span class="html-italic">Pentagon</span> image with non-periodic sparse stripes and power spectrum; (<b>d</b>) <span class="html-italic">Pentagon</span> image with non-periodic non-sparse stripes and power spectrum.</p>
Figure 3
<p>A schematic overview of the proposed method.</p>
Figure 4
<p>The implementation process of anomaly detection.</p>
Figure 5
<p>A symmetric tapered region for stripe-related anomaly detection.</p>
Figure 6
<p>The implementation process of spectrum fusion.</p>
Figure 7
<p>Structure-texture decomposition via IGF on the striped image. (<b>a</b>) Original image; (<b>b</b>) Structural image used as a guidance image; (<b>c</b>) Textural image with stripe noise.</p>
Figure 8
<p>Conventional TIR images used in the three experiments. (<b>a</b>) Experiment 1; (<b>b</b>) Experiment 2; (<b>c</b>) Experiment 3.</p>
Figure 9
<p>Destriping results of Experiment 1 for conventional TIR imagery. (<b>a</b>) MIRE; (<b>b</b>) 1D-GF; (<b>c</b>) TwSF; (<b>d</b>) DLSNUC; (<b>e</b>) WDNN; (<b>f</b>) Ours.</p>
Figure 10
<p>Destriping results of Experiment 2 for conventional TIR imagery. (<b>a</b>) MIRE; (<b>b</b>) 1D-GF; (<b>c</b>) TwSF; (<b>d</b>) DLSNUC; (<b>e</b>) WDNN; (<b>f</b>) Ours.</p>
Figure 11
<p>Destriping results of Experiment 3 for conventional TIR imagery. (<b>a</b>) MIRE; (<b>b</b>) 1D-GF; (<b>c</b>) TwSF; (<b>d</b>) DLSNUC; (<b>e</b>) WDNN; (<b>f</b>) Ours.</p>
Figure 12
<p>Comparisons of the pixels in a single row from three sets of experimental results. (<b>a</b>) Pixel values in the 20th row from Experiment 1; (<b>b</b>) Pixel values in the 140th row from Experiment 2; (<b>c</b>) Pixel values in the 50th row from Experiment 3.</p>
Figure 13
<p>Destriping results in the extreme case for conventional TIR imagery. (<b>a</b>) Original image; (<b>b</b>) MIRE; (<b>c</b>) 1D-GF; (<b>d</b>) TwSF; (<b>e</b>) DLSNUC; (<b>f</b>) WDNN; (<b>g</b>) Ours.</p>
Figure 14
<p>Remote sensing spectral images used in the three experiments. (<b>a</b>) Experiment 1; (<b>b</b>) Experiment 2; (<b>c</b>) Experiment 3.</p>
Figure 15
<p>Destriping results of Experiment 1 for remote sensing spectral images. From left to right: WFAF, SLD, RLSID, DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math> and Ours. From top to bottom: Band 21, 24, 27, 33, 34 and 35 from the MOD021KM product. Readers are recommended to zoom in all figures for better observation.</p>
Figure 16
<p>Destriping results of Experiment 2 for remote sensing spectral images. From left to right: WFAF, SLD, RLSID, DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math> and Ours. From top to bottom: Band 21, 24, 27, 33, 34 and 35 from the MOD021KM product. Readers are recommended to zoom in all figures for better observation.</p>
Figure 17
<p>Destriping results of Experiment 3 for remote sensing spectral images. From left to right: Band 27, 34 and 36 from the MOD02SSH product. From top to bottom: WFAF, SLD, LRSID, DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math> and Ours. Readers are recommended to zoom in all figures for better observation.</p>
Figure 18
<p>Visual comparison of partial areas corresponding to <a href="#remotesensing-12-03714-f015" class="html-fig">Figure 15</a>. (<b>a</b>) Original image parts from Band 24, 27 and 35; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 19
<p>Visual comparison of partial areas corresponding to <a href="#remotesensing-12-03714-f016" class="html-fig">Figure 16</a>. (<b>a</b>) Original image parts from Band 21, 33 and 34; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 20
<p>Mean cross-track profiles of the Band 21 image from Experiment 1. (<b>a</b>) Original; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 21
<p>Mean cross-track profiles of the Band 27 image from Experiment 2. (<b>a</b>) Original; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 22
<p>Mean row power spectra of the Band 21 image from Experiment 2. (<b>a</b>) Original; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 23
<p>Mean row power spectra of the Band 27 image from Experiment 2. (<b>a</b>) Original; (<b>b</b>) WFAF; (<b>c</b>) SLD; (<b>d</b>) LRISD; (<b>e</b>) DSM-<math display="inline"><semantics> <msub> <mi>l</mi> <mn>0</mn> </msub> </semantics></math>; (<b>f</b>) Ours.</p>
Figure 24
<p>The influence of the parameter <math display="inline"><semantics> <mi>α</mi> </semantics></math> on the destriping performance. (<b>a</b>) Original image; (<b>b</b>) Result with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>2</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>; (<b>c</b>) Result with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>6</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>; (<b>d</b>) Result with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>10</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>; (<b>e</b>) Result with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msup> <mn>16</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>.</p>
19 pages, 5482 KiB  
Article
Evaluation of Landsat 8 OLI/TIRS Level-2 and Sentinel 2 Level-1C Fusion Techniques Intended for Image Segmentation of Archaeological Landscapes and Proxies
by Athos Agapiou
Remote Sens. 2020, 12(3), 579; https://doi.org/10.3390/rs12030579 - 10 Feb 2020
Cited by 22 | Viewed by 5827
Abstract
The use of medium resolution, open access, and freely distributed satellite images, such as those of Landsat, is still understudied in the domain of archaeological research, mainly due to restrictions of spatial resolution. This investigation aims to showcase how the synergistic use of [...] Read more.
The use of medium resolution, open access, and freely distributed satellite images, such as those of Landsat, is still understudied in the domain of archaeological research, mainly due to restrictions of spatial resolution. This investigation aims to showcase how the synergistic use of Landsat and Sentinel optical sensors can efficiently support archaeological research through object-based image analysis (OBIA), a relatively new scientific trend, as highlighted in the relevant literature, in the domain of remote sensing archaeology. Initially, the fusion of a 30 m spatial resolution Landsat 8 OLI/TIRS Level-2 image and a 10 m spatial resolution Sentinel 2 Level-1C optical image, over the archaeological site of “Nea Paphos” in Cyprus, is evaluated in order to improve the spatial resolution of the Landsat image. At this step, various known fusion models are implemented and evaluated, namely the Gram–Schmidt, Brovey, principal component analysis (PCA), and hue-saturation-value (HSV) algorithms. In addition, all four 10 m available spectral bands of the Sentinel 2 sensor, namely the blue, green, red, and near-infrared bands (Bands 2 to 4 and Band 8, respectively), were assessed for each of the different fusion models. On the basis of these findings, the next step of the study focused on the image segmentation process, through the evaluation of different scale factors. The segmentation process is an important step moving from pixel-based to object-based image analysis. The overall results show that the Gram–Schmidt fusion method based on the near-infrared band of Sentinel 2 (Band 8), with a segmentation scale factor of 70, provides the optimum parameters for the detection of standing visible monuments, monitoring excavated areas, and detecting buried archaeological remains, without any significant spectral distortion of the original Landsat image.
The new 10 m fused Landsat 8 image provides further spatial details of the archaeological site and depicts, through the segmentation process, important details within the landscape under examination. Full article
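Of the four pansharpening models compared, the Brovey transform is simple enough to sketch directly; here a 10 m Sentinel 2 band would play the role of the panchromatic image. The shapes and names below are illustrative assumptions, not the study's exact processing chain:

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey component substitution: rescale each (upsampled) multispectral
    band by the ratio of the high-resolution band to the MS intensity.
    ms: (h, w, 3) multispectral stack; pan: (h, w) sharpening band."""
    intensity = ms.mean(axis=2)
    return ms * (pan / (intensity + eps))[..., None]

rng = np.random.default_rng(2)
ms = rng.random((8, 8, 3)) + 0.1
# Sanity check: if the sharpening band already equals the MS intensity,
# Brovey fusion leaves the bands unchanged.
fused = brovey_fusion(ms, ms.mean(axis=2))
print(np.allclose(fused, ms))  # True
```

Gram–Schmidt, the method the study found optimal, follows the same component-substitution logic but orthogonalizes a simulated low-resolution band against the MS bands before substitution.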
Graphical abstract
Figure 1
<p>(<b>a</b>) The archaeological site of “Nea Paphos” located at the western part of Cyprus; (<b>b</b>) area at the northern part of the site, indicating the standing defensive wall (yellow arrows), as well as other archaeological proxies in the area (white arrows); and (<b>c</b>) area at the central part of the archaeological site, where significant archaeological excavations have been carried out in the past (background: orthoimage from the Department of Land and Surveyors, Cyprus).</p>
Figure 2
<p>Overall methodology implemented in the study.</p>
Figure 3
<p>Fusion results of the whole archaeological site of “Nea Paphos” using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites.</p>
Figure 4
<p>Fusion results of the whole archaeological site of “Nea Paphos” (area b, <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a>) using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites. The standing defensive wall is indicated with yellow arrows, and the archaeological proxies are indicated with white arrows.</p>
Figure 5
<p>Fusion results of the whole archaeological site of “Nea Paphos” (area c, <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a>) using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites. Excavated areas are indicated with yellow arrows and the boundary of the archaeological site with the modern city of Paphos is indicated with white arrows.</p>
Figure 6
<p>Segmentation results for area b of <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a> using a scale factor from 10 to 50, with a step of 10, applied to the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 60 to 100 are shown in <a href="#remotesensing-12-00579-f007" class="html-fig">Figure 7</a>. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.</p>
Figure 7
<p>Segmentation results for area b of <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a> using a scale factor from 60 to 100, with a step of 10, applied to the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 10 to 50 are shown in <a href="#remotesensing-12-00579-f006" class="html-fig">Figure 6</a>. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.</p>
Figure 8
<p>Various ground truth areas digitized for the needs of the segmentation performance of the fused Landsat image. In the background, the Gram–Schmidt pansharpened Landsat 8 image using the NIR band of Sentinel is shown.</p>
Figure 9
<p>(<b>a</b>) Segmentation of the fused Landsat 8, after the application of the Gram–Schmidt method (indicated with yellow polygons) over the high-resolution aerial photograph of the archaeological site of “Nea Paphos”; (<b>b</b>) segmentation of the original Landsat 30 m pixel resolution. For the details of areas (a) to (f) refer to the text.</p>
Figure 10
<p>Difference of the segmentation image of the original Landsat 8 image and the fused Gram–Schmidt pansharpened image, at the scale factor 60. For the details of areas (a) to (f) refer to the text.</p>
17 pages, 2408 KiB  
Article
Supervised Detection of Façade Openings in 3D Point Clouds with Thermal Attributes
by Małgorzata Jarząbek-Rychard, Dong Lin and Hans-Gerd Maas
Remote Sens. 2020, 12(3), 543; https://doi.org/10.3390/rs12030543 - 6 Feb 2020
Cited by 28 | Viewed by 3870
Abstract
Targeted energy management and control is becoming an increasing concern in the building sector. Automatic analyses of thermal data, which minimize the subjectivity of the assessment and allow for large-scale inspections, are therefore of high interest. In this study, we propose an approach [...] Read more.
Targeted energy management and control is becoming an increasing concern in the building sector. Automatic analyses of thermal data, which minimize the subjectivity of the assessment and allow for large-scale inspections, are therefore of high interest. In this study, we propose an approach for a supervised extraction of façade openings (windows and doors) from photogrammetric 3D point clouds attributed to RGB and thermal infrared (TIR) information. The novelty of the proposed approach is in the combination of thermal information with other available characteristics of data for a classification performed directly in 3D space. Images acquired in visible and thermal infrared spectra serve as input data for the camera pose estimation and the reconstruction of 3D scene geometry. To investigate the relevance of different information types to the classification performance, a Random Forest algorithm is applied to various sets of computed features. The best feature combination is then used as an input for a Conditional Random Field that enables us to incorporate contextual information and consider the interaction between the points. The evaluation executed on a per-point level shows that the fusion of all available information types together with context consideration allows us to extract objects with 90% completeness and 95% correctness. A respective assessment executed on a per-object level shows 97% completeness and 88% accuracy. Full article
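A sketch of the per-point feature assembly such a supervised 3D classification consumes: geometric features from a local neighborhood plus the color and thermal attributes. The k-NN search, the planarity measure, and all names are illustrative assumptions; a Random Forest (e.g. scikit-learn's RandomForestClassifier) would then be trained on the resulting matrix before the CRF refinement:

```python
import numpy as np

def point_features(xyz, rgb, tir, k=8):
    """Per-point features: height, local planarity from neighborhood PCA,
    color, and the thermal attribute. Brute-force k-NN keeps it dependency-free
    (fine for a toy cloud; real data would use a spatial index)."""
    n = len(xyz)
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]      # skip the point itself
    planarity = np.empty(n)
    for i in range(n):
        p = xyz[nbrs[i]] - xyz[nbrs[i]].mean(0)
        ev = np.sort(np.linalg.eigvalsh(p.T @ p / k))   # ascending eigenvalues
        planarity[i] = (ev[1] - ev[0]) / (ev[2] + 1e-12)
    return np.column_stack([xyz[:, 2], planarity, rgb, tir])

rng = np.random.default_rng(3)
n = 30
xyz, rgb = rng.random((n, 3)), rng.random((n, 3))
tir = rng.random(n)                                 # per-point TIR attribute
features = point_features(xyz, rgb, tir)
print(features.shape)  # (30, 6)
```

The study's finding that fusing all information types works best corresponds here to keeping every column rather than a geometry-only or thermal-only subset.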
Graphical abstract
Figure 1
<p>Samples of RGB (<b>left</b>) and thermal infrared (TIR) (<b>right</b>) images collected from one station.</p>
Figure 2
<p>Façade 1 (<b>a</b>) and façade 2 (<b>b</b>) datasets: a dense 3D point cloud visualized using RGB (top) and thermal infrared (bottom) representation.</p>
Figure 3
<p>Results of façade object extraction performed on different combinations of feature sets ((<b>a</b>) Façade 1; (<b>b</b>) Façade 2).</p>
Figure 4
<p>Results of façade object extraction performed on different combinations of feature sets and corresponding reference labelling (left: façade 1; right: façade 2). Three-dimensional points belonging to windows and doors are marked in red.</p>
Figure 5
<p>Final results of contextual classification of façade openings. Detected 3D points belonging to windows and doors are marked in green ((<b>a</b>) Façade 1; (<b>b</b>) Façade 2).</p>
23 pages, 9130 KiB  
Article
Modifying an Image Fusion Approach for High Spatiotemporal LST Retrieval in Surface Dryness and Evapotranspiration Estimations
by Tri Wandi Januar, Tang-Huang Lin, Chih-Yuan Huang and Kuo-En Chang
Remote Sens. 2020, 12(3), 498; https://doi.org/10.3390/rs12030498 - 4 Feb 2020
Cited by 10 | Viewed by 3888
Abstract
Thermal infrared (TIR) satellite images are generally employed to retrieve land surface temperature (LST) data in remote sensing. LST data have been widely used in evapotranspiration (ET) estimation based on satellite observations over broad regions, as well as in the surface dryness associated with the vegetation index. Landsat-8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) can provide LST data with a 30-m spatial resolution. However, rapid changes in environmental factors, such as temperature, humidity, wind speed, and soil moisture, will affect the dynamics of ET. Therefore, ET estimation needs a high temporal resolution as well as a high spatial resolution for daily, diurnal, or even hourly analysis. A challenge with satellite observations is that higher-spatial-resolution sensors have a lower temporal resolution, and vice versa. Previous studies addressed this limitation by developing a spatial and temporal adaptive reflectance fusion model (STARFM) for visible images. In this study, given that thermal emission is the primary mechanism of TIRS, surface emissivity is used in the proposed spatial and temporal adaptive emissivity fusion model (STAEFM), a modification of the original STARFM that fuses TIR images instead of reflectance. For a high temporal resolution, the Advanced Himawari Imager (AHI) onboard the Himawari-8 satellite is explored. Thus, Landsat-like TIR images with a 10-minute temporal resolution can be synthesized by fusing TIR images of Himawari-8 AHI and Landsat-8 TIRS. The performance of the STAEFM in retrieving LST was compared with the STARFM and enhanced STARFM (ESTARFM) based on the similarity to the observed Landsat image and the differences with air temperature. The peak signal-to-noise ratio (PSNR) value of the STAEFM image is more than 42 dB, while the values for the STARFM and ESTARFM images are around 31 and 38 dB, respectively.
The differences between LST and air temperature data collected from five meteorological stations range from 1.53 °C to 4.93 °C, smaller than those of STARFM and ESTARFM. The examination of the case study showed reasonable results of hourly LST, dryness index, and ET retrieval, indicating significant potential for the proposed STAEFM to provide very-high-spatiotemporal-resolution (30 m every 10 min) TIR images for surface dryness and ET monitoring. Full article
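The PSNR comparison above follows the standard definition. A hedged sketch: the pixel arrays and the peak value of 10,000 (matching the 0 to 10,000 normalization mentioned for Figure 7) are illustrative assumptions, not the paper's data:

```python
import math

def psnr(reference, fused, peak=10000.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images."""
    mse = sum((r - f) ** 2 for r, f in zip(reference, fused)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [5000.0, 5200.0, 4800.0, 5100.0]  # hypothetical Landsat TIR pixels
fus = [5010.0, 5190.0, 4795.0, 5105.0]  # hypothetical fused pixels
print(round(psnr(ref, fus), 1))  # prints 62.0
```

A higher PSNR means the fused image is closer to the observed Landsat reference, which is how the STAEFM's >42 dB result ranks above the roughly 31 and 38 dB of STARFM and ESTARFM.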
Graphical abstract
Figure 1
<p>The study area is a 1063 km<sup>2</sup> (120°09′05″E–120°28′30″E, 23°17′45″N–23°35′45″N) plot in Chiayi County, Taiwan, mostly covered by agricultural land. The black circles indicate the locations of weather stations A (23°34′9.13″N, 120°17′13.2″E), B (23°33′12.6″N, 120°25′13.08″E), C (23°24′47.16″N, 120°18′0.72″E), D (23°20′57.12″N, 120°24′22.68″E), and E (23°18′44.64″N, 120°18′30.96″E).</p>
Figure 2
<p>Flowchart of the STAEFM approach. NIR, near infrared; SWIR, shortwave infrared; TIR, thermal infrared; mNDVI, modified normalized difference vegetation index.</p>
Figure 3
<p>Digital number of Himawari-8 TIR images (<b>a</b>) without sharpening and (<b>b</b>) with sharpening, compared to (<b>c</b>) the digital number of the Landsat-8 TIR image at the same acquisition time (15 November 2018).</p>
Figure 4
<p>Resulting fused TIR images from the STARFM, ESTARFM, and STAEFM approaches.</p>
Figure 5
<p>LST (°C) estimated from (<b>a</b>) Landsat TIR, (<b>b</b>) STARFM, (<b>c</b>) ESTARFM, and (<b>d</b>) STAEFM on 18 January 2019.</p>
Figure 6
<p>Histograms of LST images from (<b>a</b>) Landsat, (<b>b</b>) STARFM, (<b>c</b>) ESTARFM, and (<b>d</b>) STAEFM on 18 January 2019.</p>
Figure 7
<p>Comparison of Landsat TIR images with fused images of (<b>a</b>) STARFM, (<b>b</b>) ESTARFM, and (<b>c</b>) STAEFM after being normalized from 0 to 10,000.</p>
Figure 8
<p>Hourly LST images (°C) estimated from STAEFM TIR on 18 January 2019 at (<b>a</b>) 09:00–10:00, (<b>b</b>) 11:00–12:00, and (<b>c</b>) 13:00–14:00, and (<b>d</b>–<b>f</b>) their histograms, respectively.</p>
Figure 9
<p>Regression of dry and wet edges estimated from STAEFM TIR on 18 January 2019 at (<b>a</b>) 09:00–10:00, (<b>b</b>) 11:00–12:00, and (<b>c</b>) 13:00–14:00.</p>
Figure 10
<p>Same as <a href="#remotesensing-12-00498-f008" class="html-fig">Figure 8</a>, but with the results of temperature vegetation dryness index (TVDI) on 18 January 2019.</p>
Figure 11
<p>Same as <a href="#remotesensing-12-00498-f008" class="html-fig">Figure 8</a>, but with the results of actual evapotranspiration (ET) (mm/hour) on 18 January 2019.</p>
Figure 12
<p>Correlation of hourly actual ET estimated from STAEFM TIR on 18 January 2019 with (<b>a</b>) albedo, (<b>b</b>) air temperature, (<b>c</b>) relative humidity, and (<b>d</b>) wind speed.</p>
41 pages, 11096 KiB  
Article
Mapping Listvenite Occurrences in the Damage Zones of Northern Victoria Land, Antarctica Using ASTER Satellite Remote Sensing Data
by Amin Beiranvand Pour, Yongcheol Park, Laura Crispini, Andreas Läufer, Jong Kuk Hong, Tae-Yoon S. Park, Basem Zoheir, Biswajeet Pradhan, Aidy M. Muslim, Mohammad Shawkat Hossain and Omeid Rahmani
Remote Sens. 2019, 11(12), 1408; https://doi.org/10.3390/rs11121408 - 13 Jun 2019
Cited by 65 | Viewed by 7152
Abstract
Listvenites normally form during hydrothermal/metasomatic alteration of mafic and ultramafic rocks and represent a key indicator for the occurrence of ore mineralizations in orogenic systems. Hydrothermal/metasomatic alteration mineral assemblages are one of the significant indicators for ore mineralizations in the damage zones of major tectonic boundaries, which can be detected using multispectral satellite remote sensing data. In this research, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) multispectral remote sensing data were used to detect listvenite occurrences and alteration mineral assemblages in the poorly exposed damage zones of the boundaries between the Wilson, Bowers and Robertson Bay terranes in Northern Victoria Land (NVL), Antarctica. Spectral information for detecting alteration mineral assemblages and listvenites was extracted at pixel and sub-pixel levels using the Principal Component Analysis (PCA)/Independent Component Analysis (ICA) fusion technique, Linear Spectral Unmixing (LSU) and Constrained Energy Minimization (CEM) algorithms. Mineralogical assemblages containing Fe<sup>2+</sup>, Fe<sup>3+</sup>, Fe-OH, Al-OH, Mg-OH and CO<sub>3</sub> spectral absorption features were detected in the damage zones of the study area by applying PCA/ICA fusion to the visible and near infrared (VNIR) and shortwave infrared (SWIR) bands of ASTER. Silicate lithological groups were mapped and discriminated by applying PCA/ICA fusion to the thermal infrared (TIR) bands of ASTER. Fraction images of prospective alteration minerals, including goethite, hematite, jarosite, biotite, kaolinite, muscovite, antigorite, serpentine, talc, actinolite, chlorite, epidote, calcite, dolomite and siderite, and possible zones encompassing listvenite occurrences were produced by applying the LSU and CEM algorithms to the ASTER VNIR+SWIR spectral bands.
Several potential zones for listvenite occurrences were identified, typically in association with mafic metavolcanic rocks (Glasgow Volcanics) in the Bowers Mountains. Comparison of the remote sensing results with geological investigations in the study area demonstrates the value of the remote sensing approach for mapping poorly exposed lithological units, detecting possible zones of listvenite occurrences and discriminating subpixel abundances of alteration mineral assemblages in the damage zones of the Wilson-Bowers and Bowers-Robertson Bay terrane boundaries and in intra-Bowers and Wilson terrane fault zones with high fluid flow. The satellite remote sensing approach developed in this research is explicitly pertinent to detecting key alteration mineral indicators for prospecting hydrothermal/metasomatic ore minerals in remote and inaccessible zones of other orogenic systems around the world. Full article
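Linear spectral unmixing as used above models each pixel as a linear mix of endmember spectra. A minimal sketch restricted to two endmembers under the sum-to-one constraint; the four-band "spectra" below are hypothetical, not the paper's ASTER endmembers:

```python
def unmix_two(pixel, em1, em2):
    """Least-squares abundance of em1, with a1 + a2 = 1, clipped to [0, 1]."""
    d = [a - b for a, b in zip(em1, em2)]    # spectral difference vector
    r = [p - b for p, b in zip(pixel, em2)]  # pixel relative to em2
    a1 = sum(x * y for x, y in zip(r, d)) / sum(x * x for x in d)
    return max(0.0, min(1.0, a1))            # enforce physical abundance bounds

em_carbonate = [0.8, 0.7, 0.6, 0.9]  # hypothetical endmember reflectances
em_chlorite  = [0.2, 0.3, 0.4, 0.1]
mixed_pixel  = [0.5, 0.5, 0.5, 0.5]  # an even mixture of the two
print(round(unmix_two(mixed_pixel, em_carbonate, em_chlorite), 3))  # prints 0.5
```

Applied per pixel, this yields the fraction images from which dominance maps such as Figure 10 are thresholded; the full study unmixes many endmembers simultaneously, which requires a constrained multivariate least-squares solver.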
Figure 1
<p>Geological map of Northern Victoria Land (NVL). Modified from [<a href="#B7-remotesensing-11-01408" class="html-bibr">7</a>,<a href="#B48-remotesensing-11-01408" class="html-bibr">48</a>] based on the GIGAMAP series [<a href="#B52-remotesensing-11-01408" class="html-bibr">52</a>]. The black rectangle shows the coverage of ASTER images used in this study.</p>
Figure 2
<p>(<b>A</b>) Sketch map with the location of the fault (damage) zones where the hydrothermal alteration and veining are more intense (modified from [<a href="#B51-remotesensing-11-01408" class="html-bibr">51</a>]). (<b>B</b>) Sketch map of lithological-mineralogical sequences in the damage (fault) zone at the boundary of WT and BT (modified from [<a href="#B44-remotesensing-11-01408" class="html-bibr">44</a>]).</p>
Figure 3
<p>ASTER mosaic of the Bowers Terrane (BT) and surrounding area (the coverage of ASTER images used in this study). Magenta rectangles indicate selected subsets of fault zones (zones 1–6) for detailed mineral mapping.</p>
Figure 4
<p>An overview of the methodological flowchart used in this study. Characterizations of the techniques can be found in the ENVI Tutorials, Research Systems, Inc., Boulder, CO [<a href="#B81-remotesensing-11-01408" class="html-bibr">81</a>].</p>
Figure 5
<p>Laboratory reflectance spectra of the selected alteration minerals resampled to the response functions of ASTER VNIR+SWIR bands [<a href="#B117-remotesensing-11-01408" class="html-bibr">117</a>]. Cubes show the location of the ASTER VNIR+SWIR bands (B1 to B9) from 0.50 μm to 2.5 μm. (<b>A</b>) goethite; (<b>B</b>) hematite; (<b>C</b>) jarosite; (<b>D</b>) biotite; (<b>E</b>) kaolinite; (<b>F</b>) muscovite; (<b>G</b>) antigorite; (<b>H</b>) serpentine; (<b>I</b>) talc; (<b>J</b>) actinolite; (<b>K</b>) chlorite; (<b>L</b>) epidote; (<b>M</b>) calcite; (<b>N</b>) dolomite; (<b>P</b>) siderite; and (<b>Q</b>) chalcedony.</p>
Figure 5 Cont.
<p>Laboratory reflectance spectra of the selected alteration minerals resampled to the response functions of ASTER VNIR+SWIR bands [<a href="#B117-remotesensing-11-01408" class="html-bibr">117</a>]. Cubes show the location of the ASTER VNIR+SWIR bands (B1 to B9) from 0.50 μm to 2.5 μm. (<b>A</b>) goethite; (<b>B</b>) hematite; (<b>C</b>) jarosite; (<b>D</b>) biotite; (<b>E</b>) kaolinite; (<b>F</b>) muscovite; (<b>G</b>) antigorite; (<b>H</b>) serpentine; (<b>I</b>) talc; (<b>J</b>) actinolite; (<b>K</b>) chlorite; (<b>L</b>) epidote; (<b>M</b>) calcite; (<b>N</b>) dolomite; (<b>P</b>) siderite; and (<b>Q</b>) chalcedony.</p>
Figure 6
<p>ASTER mosaic image of Fe-MI, Al-OH-MI and Fe,Mg-OH-MI as an RGB color composite. It shows the exposed lithologies in the Bowers Terrane (BT) and surrounding areas. The locations of some large mountain ranges are shown.</p>
Figure 7
<p>ASTER image maps derived from the ICA rotation and subsequent RGB color composite applied to the VNIR+SWIR bands. (<b>A</b>) Spatial subset covering zone 1 and surrounding areas; (<b>B</b>) Spatial subset covering zones 2 and 3; (<b>C</b>) Spatial subset covering zone 4 and surrounding areas; and (<b>D</b>) Spatial subset covering zones 5 and 6.</p>
Figure 8
<p>ASTER image maps derived from the ICA rotation and subsequent RGB color composite applied to the TIR bands. (<b>A</b>) Spatial subset covering zone 1 and surrounding areas; (<b>B</b>) Spatial subset covering zones 2 and 3; (<b>C</b>) Spatial subset covering zone 4 and surrounding areas; and (<b>D</b>) Spatial subset covering zones 5 and 6.</p>
Figure 9
<p>End-member (mean) spectra extracted from the ASTER VNIR+SWIR bands using the n-Dimensional analysis technique for the six selected subsets of the damage (fault) zones. (<b>A</b>) Zone 1; (<b>B</b>) Zone 2; (<b>C</b>) Zone 3; (<b>D</b>) Zone 4; (<b>E</b>) Zone 5; and (<b>F</b>) Zone 6. ASTER band center positions are shown for the selected zones.</p>
Figure 10
<p>LSU classification mineral maps derived from fraction images of the extracted end-members. (<b>A</b>) Zone 1; (<b>B</b>) Zone 2; (<b>C</b>) Zone 3; (<b>D</b>) Zone 4; (<b>E</b>) Zone 5; and (<b>F</b>) Zone 6. Spectrally dominant mineral groups (concentration greater than 10%) are depicted as colored pixels.</p>
Figure 10 Cont.
<p>LSU classification mineral maps derived from fraction images of the extracted end-members. (<b>A</b>) Zone 1; (<b>B</b>) Zone 2; (<b>C</b>) Zone 3; (<b>D</b>) Zone 4; (<b>E</b>) Zone 5; and (<b>F</b>) Zone 6. Spectrally dominant mineral groups (concentration greater than 10%) are depicted as colored pixels.</p>
Figure 11
<p>Fraction images of the selected end-member minerals derived from the CEM algorithm for zone 2, covering the Lanterman Range and surrounding areas. A pseudo-color ramp was applied to the greyscale rule images.</p>
Figure 12
<p>Field photographs of the typical altered rocks and listvenites. (<b>A</b>) Helicopter view of the fault zone characterized by epidote–prehnite–quartz coatings, and epidotization in Glasgow Volcanics country rocks, Mt Gow, Bowers Mountains; (<b>B</b>) Helicopter view of the reddish (iron oxide/hydroxide minerals) to greenish (chlorite-epidote minerals) alterations of Glasgow Volcanics along fault zones, NE slopes of Mt Gow, Bowers Mountains; (<b>C</b>) Helicopter view of the metasomatic alteration (carbonate dominated) of Glasgow Volcanics around a quartz–carbonate fault vein system (Dorn Glacier; for details see [<a href="#B1-remotesensing-11-01408" class="html-bibr">1</a>]); (<b>D</b>) Close view of reddish alteration (iron oxide/hydroxide minerals dominated) along fault zones in Glasgow Volcanics, NE slopes of Mt Gow, Bowers Mountains; (<b>E</b>) Close view of shear zone with magnesite-talc-quartz mylonite derived from mafic and ultramafic rocks at the WT-BT boundary, Lanterman Range; (<b>F</b>) View of layers of alternating foliated ultramafite and amphibolite within a mylonitic shear zone characterised by listvenite and magnesite-talc-quartz rich mylonite, WT-BT boundary, Lanterman Range; (<b>G</b>) Close view of listvenites from a shear zone at the WT-BT boundary, Lanterman Range. The foliated green rock is rich in Cr-muscovite and chlorite, the light brown part is rich in Fe-Mg carbonates; (<b>H</b>) Close view of a damage zone with intense epidotization (greenish to pinkish color) and silicification of the host Glasgow Volcanics, Mt Gow, Bowers Mountains.</p>
Figure 12 Cont.
<p>Field photographs of the typical altered rocks and listvenites. (<b>A</b>) Helicopter view of the fault zone characterized by epidote–prehnite–quartz coatings, and epidotization in Glasgow Volcanics country rocks, Mt Gow, Bowers Mountains; (<b>B</b>) Helicopter view of the reddish (iron oxide/hydroxide minerals) to greenish (chlorite-epidote minerals) alterations of Glasgow Volcanics along fault zones, NE slopes of Mt Gow, Bowers Mountains; (<b>C</b>) Helicopter view of the metasomatic alteration (carbonate dominated) of Glasgow Volcanics around a quartz–carbonate fault vein system (Dorn Glacier; for details see [<a href="#B1-remotesensing-11-01408" class="html-bibr">1</a>]); (<b>D</b>) Close view of reddish alteration (iron oxide/hydroxide minerals dominated) along fault zones in Glasgow Volcanics, NE slopes of Mt Gow, Bowers Mountains; (<b>E</b>) Close view of shear zone with magnesite-talc-quartz mylonite derived from mafic and ultramafic rocks at the WT-BT boundary, Lanterman Range; (<b>F</b>) View of layers of alternating foliated ultramafite and amphibolite within a mylonitic shear zone characterised by listvenite and magnesite-talc-quartz rich mylonite, WT-BT boundary, Lanterman Range; (<b>G</b>) Close view of listvenites from a shear zone at the WT-BT boundary, Lanterman Range. The foliated green rock is rich in Cr-muscovite and chlorite, the light brown part is rich in Fe-Mg carbonates; (<b>H</b>) Close view of a damage zone with intense epidotization (greenish to pinkish color) and silicification of the host Glasgow Volcanics, Mt Gow, Bowers Mountains.</p>
Figure 13
<p>Different types of alteration mineralogy in hydrothermally altered rocks. Microphotographs of (<b>A</b>) epidote+prehnite+chlorite-dominated alteration in the damage zone of a fault in Glasgow Volcanics (plane-polarized light; #09.12.03GL 9); (<b>B</b>) carbonated basalt in the damage zone of a shear zone (Dorn Glacier, plane-polarized light; #08.12.05GL6); and (<b>C</b>,<b>D</b>) foliated listvenites from two brittle-ductile shear zones at the WT-BT boundary (<b>C</b>: crossed-polarized light, #23.12.96 GL 6; <b>D</b>: plane-polarized light, #19.12.96 GL 6). Abbreviations: Ep = epidote, Prh = prehnite, Chl = chlorite, Cb = carbonate, Wmica = white mica, Qtz = quartz.</p>
Figure 14
<p>Representative XRD analysis of samples collected from listvenites (<b>A</b>–<b>C</b>) and altered Glasgow volcanics (reddish to greenish alterations) at Mt Gow, Bowers Mountains (<b>D</b>–<b>E</b>).</p>
Figure 14 Cont.
<p>Representative XRD analysis of samples collected from listvenites (<b>A</b>–<b>C</b>) and altered Glasgow volcanics (reddish to greenish alterations) at Mt Gow, Bowers Mountains (<b>D</b>–<b>E</b>).</p>
16 pages, 2903 KiB  
Article
Downscaling of Surface Soil Moisture Retrieval by Combining MODIS/Landsat and In Situ Measurements
by Chenyang Xu, John J. Qu, Xianjun Hao, Michael H. Cosh, John H. Prueger, Zhiliang Zhu and Laurel Gutenberg
Remote Sens. 2018, 10(2), 210; https://doi.org/10.3390/rs10020210 - 1 Feb 2018
Cited by 56 | Viewed by 8176
Abstract
Soil moisture, especially surface soil moisture (SSM), plays an important role in the development of various natural hazards that result from extreme weather events such as drought, flooding, and landslides. There have been many remote sensing methods for soil moisture retrieval based on microwave or optical thermal infrared (TIR) measurements. TIR remote sensing has been popular for SSM retrieval due to its fine spatial and temporal resolutions. However, because of limitations in the penetration of optical TIR radiation and cloud cover, TIR methods can only be used under clear sky conditions. Microwave SSM retrieval is based on solid physical principles, and has advantages in cases of cloud cover, but it has low spatial resolution. For applications at the local scale, SSM data at high spatial and temporal resolutions are important, especially for agricultural management and decision support systems. Current remote sensing measurements usually have either a high spatial resolution or a high temporal resolution, but not both. This study aims to retrieve SSM at both high spatial and temporal resolutions through the fusion of Moderate Resolution Imaging Spectroradiometer (MODIS) and Land Remote Sensing Satellite (Landsat) data. Based on the universal triangle trapezoid, this study investigated the relationship between land surface temperature (LST) and the normalized difference vegetation index (NDVI) under different soil moisture conditions to construct an improved nonlinear model for SSM retrieval with LST and NDVI. A case study was conducted in Iowa, in the United States (USA) (Lat: 42.2°~42.7°, Lon: −93.6°~−93.2°), from 1 May 2016 to 31 August 2016. Daily SSM in an agricultural area during the crop-growing season was downscaled to 120-m spatial resolution by fusing Landsat 8 with MODIS, with an R<sup>2</sup> of 0.5766 and RMSE from 0.0302 to 0.1124 m<sup>3</sup>/m<sup>3</sup>. Full article
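Triangle-method retrievals of this kind commonly express SSM as a low-order polynomial in NDVI and scaled LST. A hedged sketch of that evaluation step only; the coefficient grid is hypothetical (in the study it would be fitted against the in situ measurements), and the paper's improved nonlinear model is not reproduced here:

```python
def triangle_ssm(ndvi, lst, lst_min, lst_max, coeffs):
    """Evaluate SSM = sum_ij coeffs[i][j] * NDVI**i * T**j, T = scaled LST."""
    t = (lst - lst_min) / (lst_max - lst_min)  # LST scaled to [0, 1] over the scene
    return sum(a * ndvi ** i * t ** j
               for i, row in enumerate(coeffs)
               for j, a in enumerate(row))

# Hypothetical 2x2 coefficient grid: wetter for cool, vegetated pixels.
coeffs = [[0.35, -0.25],
          [0.10, -0.05]]
print(round(triangle_ssm(0.6, 305.0, 290.0, 320.0, coeffs), 4))  # prints 0.27
```

Evaluating such a model with Landsat-scale NDVI and LST at MODIS revisit frequency is what yields the downscaled daily 120-m SSM fields described above.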
(This article belongs to the Special Issue Retrieval, Validation and Application of Satellite Soil Moisture Data)
Graphical abstract
Figure 1
<p>(<b>a</b>) Location of the South Fork Experimental Watershed (from Michael H. Cosh); (<b>b</b>) the locations of permanent (red) and temporary (blue) sites in the study area.</p>
Figure 2
<p>The flow chart for fusing MODIS and Landsat 8 measurements for surface soil moisture (SSM) retrieval.</p>
Figure 3
<p>Universal triangle relationship between soil moisture, temperature, and the normalized difference vegetation index (NDVI) [<a href="#B30-remotesensing-10-00210" class="html-bibr">30</a>].</p>
Figure 4
<p>Histogram of the difference between retrieved land surface temperature (LST) and MODIS LST products.</p>
Figure 5
<p>(<b>a</b>–<b>g</b>) show the scatter plots of the retrieved LST against the LST products for the seven Landsat overpass days, respectively.</p>
Figure 5 Cont.
<p>(<b>a</b>–<b>g</b>) show the scatter plots of the retrieved LST against the LST products for the seven Landsat overpass days, respectively.</p>
Figure 6
<p>Histogram of the difference between predicted SSM and observed SSM.</p>
Figure 7
<p>Comparison of the predicted SSM with the observed SSM.</p>
Figure 8
<p>Time series of predicted and observed SSM.</p>