Widespread Applications Based on Hyperspectral Technologies from Space

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2020) | Viewed by 42357

Special Issue Editors


Guest Editor
Institute of Methodologies for Environmental Analysis (IMAA), National Research Council (CNR), C.da S. Loja, 85050 Tito, PZ, Italy
Interests: hyperspectral remote sensing (VSWIR-LWIR); sensor data calibration and pre-processing; field spectroscopy; retrieval of surface parameters; soil spectral characterization and geology; archaeological site analysis

Guest Editor
Istituto Nazionale di Geofisica e Vulcanologia (INGV), National Earthquake Observatory, Rome, Italy
Interests: airborne and space imaging spectrometers acquiring data in the VSWIR-LWIR; technical characteristics and requirements for geophysical and geological applications; retrieval algorithms for surface temperature and volcanic gas emissions; space and ground data integration for cultural heritage preservation

Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Interests: hyperspectral remote sensing; dynamic monitoring of global resource environment with remote sensing; intelligent interpretation of remotely sensed big data

Special Issue Information

Dear Colleagues,

Hyperspectral spaceborne missions have been acquiring images worldwide for over a decade, with hundreds of contiguous spectral bands (from the VSWIR to the LWIR), supporting the development of a wide range of environmental applications.

Missions such as Hyperion, CHRIS, and HJ-1A have highlighted both the criticalities and the opportunities offered by the use of spaceborne hyperspectral technologies in the development of environmental products.

Starting in 2017, a significant number of new orbital missions (such as GF-5, EnMAP, PRISMA, CCRSS, and ECOSTRESS) will become available, giving scientists the new and challenging scenario of a hyperspectral sensor constellation acquiring data at the global scale with a shorter revisit time.

Moreover, according to the scientific literature, more complex hyperspectral missions are under development or study (e.g., HyspIRI, HISUI, HYPXIM, and SHALOM).

The aim of this Special Issue is to highlight the impact of past hyperspectral missions and to foresee the effectiveness of future ones. The Special Issue will reflect on the lessons learned from past and present missions and on the perspectives offered by the advent of new ones. This could be achieved by describing how the retrieval of surface parameters and the understanding of surface phenomena can be enhanced by the availability of the new hyperspectral spaceborne missions.

Some scientific challenges relate to the development of land surface (including coastal systems) products that will benefit from the upcoming hyperspectral resources, especially when combined with available EO data.

Therefore, we would like to invite submissions on the following topics:

  • Integration and comparison of new hyperspectral image data/constellations;
  • Natural processes and human activities and their interactions, including archaeology;
  • Environmental and natural hazards and risk reduction;
  • Coastal systems, including inland waters, and their interaction with the land;
  • Geology, soil and agriculture;
  • Atmospheric correction and atmospheric constituent characterization;
  • Hyperspectral data processing for defence and security;
  • Astrophysics and planetary exploration;
  • Hyperspectral sensor synergy with other missions;
  • Sensor calibration, including vicarious calibration.

Authors are required to check and follow the specific Instructions to Authors, https://www.mdpi.com/journal/remotesensing/instructions.

Dr. Stefano Pignatti
Dr. Maria Fabrizia Buongiorno
Dr. Bing Zhang
Guest Editors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

16 pages, 3189 KiB  
Article
Adapting Satellite Soundings for Operational Forecasting within the Hazardous Weather Testbed
by Rebekah B. Esmaili, Nadia Smith, Emily B. Berndt, John F. Dostalek, Brian H. Kahn, Kristopher White, Christopher D. Barnet, William Sjoberg and Mitchell Goldberg
Remote Sens. 2020, 12(5), 886; https://doi.org/10.3390/rs12050886 - 10 Mar 2020
Cited by 22 | Viewed by 5225
Abstract
In this paper, we describe how researchers and weather forecasters work together to make satellite sounding data sets more useful in severe weather forecasting applications through participation in National Oceanic and Atmospheric Administration (NOAA)’s Hazardous Weather Testbed (HWT) and JPSS Proving Ground and Risk Reduction (PGRR) program. The HWT provides a forum for collaboration to improve products ahead of widespread operational deployment. We found that the utilization of the NOAA-Unique Combined Atmospheric Processing System (NUCAPS) soundings was improved when the product developer and forecaster directly communicated to overcome misunderstandings and to refine user requirements. Here we share our adaptive strategy for (1) assessing when and where NUCAPS soundings improved operational forecasts by using real, convective case studies and (2) working to increase NUCAPS utilization by improving existing products through direct, face-to-face interaction. Our goal is to discuss the lessons we learned and to share both our successes and challenges working with the weather forecasting community in designing, refining, and promoting novel products. We foresee that our experience in the NUCAPS product development life cycle may be relevant to other communities who can then build on these strategies to transition their products from research to operations (and operations back to research) within the satellite meteorological community.

Figure 1. High-level flow chart of the step-wise NOAA-Unique Combined Atmospheric Processing System (NUCAPS) retrieval algorithm that outputs temperature (T), moisture (q) and trace gases. In Advanced Weather Interactive Processing Systems (AWIPS), NUCAPS retrievals of T, q and ozone (O3) are color-coded as red, yellow and green to indicate if and when they failed quality control checks. Steps B and D, which are yellow with black text, are regression steps; if they fail, they are flagged yellow in AWIPS, and these retrievals should be used with caution. Steps A, C and E, which are red with white text, are cloud-clearing or retrieval stages of the algorithm. If any of these fails, the retrieval is unlikely to yield meaningful results and is flagged red in AWIPS. The entire algorithm runs regardless of whether any one step passes or fails.

Figure 2. The four NUCAPS products demonstrated in the 2019 Hazardous Weather Testbed Experimental Forecast Program: baseline NUCAPS soundings in (a) plan view with quality flags; the NSHARP display of (b) baseline NUCAPS soundings and (c) modified soundings northeast of Bismarck, ND on May 15, 2019, ahead of a low-level moisture gradient; (d) gridded NUCAPS showing 2FHAG temperature on June 3, 2019; and (e) NUCAPS-Forecast on May 10, 2019, showing CAPE gradients five hours past initialization.

Figure 3. Responses to “How helpful were the following NUCAPS products to making your forecast(s)?”.

Figure 4. Responses to the questions (a) “Did you use NUCAPS products as a component in your decision to issue a warning or Special Weather Statement?” and (b) “Which product(s) factored into your decision process?”.

Figure 5. Responses to the question “Which of the following NUCAPS profiles did you use?”.

Figure 6. Responses to the question “If convection initiated, did NUCAPS-Forecast provide skill in determining the eventual convective intensity, convective mode, and type of severe weather produced?”.

Figure 7. Responses to the question “Did any of the following prevent you from using NUCAPS products in your analysis?”.

Figure 8. Responses to the question “How often would you use NUCAPS in the future?”.

18 pages, 6016 KiB  
Article
A Novel Tri-Training Technique for the Semi-Supervised Classification of Hyperspectral Images Based on Regularized Local Discriminant Embedding Feature Extraction
by Depin Ou, Kun Tan, Qian Du, Jishuai Zhu, Xue Wang and Yu Chen
Remote Sens. 2019, 11(6), 654; https://doi.org/10.3390/rs11060654 - 18 Mar 2019
Cited by 24 | Viewed by 3822
Abstract
This paper introduces a novel semi-supervised tri-training classification algorithm based on regularized local discriminant embedding (RLDE) for hyperspectral imagery. In this algorithm, the RLDE method is used for optimal feature information extraction, to solve the problems of singular values and over-fitting, which are the main problems in the local discriminant embedding (LDE) and local Fisher discriminant analysis (LFDA) methods. An active learning method is then used to select the most useful and informative samples from the candidate set. In the experiments undertaken in this study, the three base classifiers were multinomial logistic regression (MLR), k-nearest neighbor (KNN), and random forest (RF). To confirm the effectiveness of the proposed RLDE method, experiments were conducted on two real hyperspectral datasets (Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Reflective Optics System Imaging Spectrometer (ROSIS)), and the proposed RLDE tri-training algorithm was compared with its counterparts of tri-training alone, LDE, and LFDA. The experiments confirmed that the proposed approach can effectively improve the classification accuracy for hyperspectral imagery.
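
As a concrete illustration of the tri-training loop summarized above, here is a minimal Python sketch using generic scikit-learn stand-ins for the paper's three base classifiers (MLR, KNN, and RF). The RLDE feature extraction and the active-learning sample selection are omitted, and all function names and parameters are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

def tri_train(X_lab, y_lab, X_unlab, rounds=5):
    """Sketch of tri-training: each classifier is retrained on the
    labeled set plus the unlabeled samples its two peers agree on."""
    clfs = [LogisticRegression(max_iter=1000),        # multinomial logistic regression
            KNeighborsClassifier(n_neighbors=5),      # k-nearest neighbor
            RandomForestClassifier(n_estimators=100)] # random forest
    for clf in clfs:
        clf.fit(X_lab, y_lab)
    for _ in range(rounds):
        for i, clf in enumerate(clfs):
            j, k = [m for m in range(3) if m != i]
            pj = clfs[j].predict(X_unlab)
            pk = clfs[k].predict(X_unlab)
            agree = pj == pk                          # peers agree on these samples
            if agree.any():
                Xi = np.vstack([X_lab, X_unlab[agree]])
                yi = np.concatenate([y_lab, pj[agree]])
                clf.fit(Xi, yi)                       # retrain with pseudo-labels
    return clfs
```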

Figure 1. (a) Pseudocolor composite of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Indian Pines dataset. (b) The test area with 16 mutually exclusive ground-truth classes.

Figure 2. (a) Pseudocolor composite of the Reflective Optics System Imaging Spectrometer (ROSIS) Pavia scene. (b) The test area with nine mutually exclusive ground-truth classes.

Figure 3. The results of cooperative training classification based on RLDE local feature extraction.

Figure 4. AVIRIS data classification accuracy, as obtained by the different feature extraction methods under different initial training samples.

Figure 5. Co-training classification results based on the different feature extraction methods.

Figure 6. AVIRIS data classification accuracy, as obtained by the different feature extraction methods under different initial training samples.

Figure 7. Co-training classification results based on the different feature extraction methods.

Figure 8. Overall accuracy (OA) versus w and γ0 for the AVIRIS and ROSIS datasets.

Figure 9. OA versus α for the AVIRIS and ROSIS datasets.

Figure 10. AVIRIS data and ROSIS data classification accuracy for different feature dimensions, as obtained by the different feature extraction methods under different initial training samples.

16 pages, 4696 KiB  
Article
Improving Remote Sensing Image Super-Resolution Mapping Based on the Spatial Attraction Model by Utilizing the Pansharpening Technique
by Peng Wang, Gong Zhang, Siyuan Hao and Liguo Wang
Remote Sens. 2019, 11(3), 247; https://doi.org/10.3390/rs11030247 - 26 Jan 2019
Cited by 7 | Viewed by 3621
Abstract
The spatial distribution information of remote sensing images can be derived by the super-resolution mapping (SRM) technique. Super-resolution mapping, based on the spatial attraction model (SRMSAM), has been an important SRM method, due to its simplicity and explicit physical meanings. However, the resolution of the original remote sensing image is coarse, and the existing SRMSAM cannot take full advantage of the spatial–spectral information from the original image. To utilize more spatial–spectral information, improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique (SRMSAM-PAN) is proposed. In SRMSAM-PAN, a novel processing path, named the pansharpening path, is added to the existing SRMSAM. The original coarse remote sensing image is first fused with the high-resolution panchromatic image from the same area by the pansharpening technique in the novel pansharpening path, and the improved image is unmixed to obtain the novel fine-fraction images. The novel fine-fraction images from the pansharpening path and the existing fine-fraction images from the existing path are then integrated to produce finer-fraction images with more spatial–spectral information. Finally, the values predicted from the finer-fraction images are utilized to allocate class labels to all subpixels, to achieve the final mapping result. Experimental results show that the proposed SRMSAM-PAN can obtain a higher mapping accuracy than the existing SRMSAM methods.
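
A toy sketch of the integration step described above may help: it assumes both paths have already produced per-class fraction images at the fine scale and blends them with a weight α before allocating classes to subpixels. The linear blend and the argmax allocation here are simplifying assumptions, not the paper's exact allocation strategy.

```python
import numpy as np

def integrate_fractions(frac_existing, frac_pan, alpha=0.5):
    """Blend two (classes, rows, cols) fraction-image stacks and
    allocate each subpixel to the class with the largest value."""
    finer = alpha * frac_pan + (1.0 - alpha) * frac_existing
    return np.argmax(finer, axis=0)  # (rows, cols) class-label map
```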

Figure 1. Example of spatial correlation. (a) Spectral unmixing result of Class A. (b) Probability of Distribution 1. (c) Probability of Distribution 2.

Figure 2. The flowchart of super-resolution mapping based on the spatial attraction model (SRMSAM).

Figure 3. Euclidean distance. (a) Central subpixel p_n and its eight neighboring pixels. (b) Central subpixel p_n and its eight neighboring subpixels.

Figure 4. The flowchart of the principal component analysis (PCA) pansharpening.

Figure 5. The flowchart of the pansharpening path.

Figure 6. The flowchart of improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique (SRMSAM-PAN).

Figure 7. (a) RGB composites of images (bands 19, 30, and 44 for red, green, and blue, respectively). (b) Coarse image (S = 4). (c) Panchromatic image. (d) Pansharpening result.

Figure 8. SRMSAM results in Experiment 1 (S = 4). (a) Reference image. (b) Subpixel/pixel spatial attraction model (SPSAM). (c) Subpixel/subpixel spatial attraction model (MSPSAM). (d) Hybrid spatial attraction model (HSAM). (e) SRMSAM-PAN.

Figure 9. (a) RGB composites of the image (bands 102, 56, and 31 for red, green, and blue, respectively). (b) Coarse image (S = 4). (c) Panchromatic image. (d) Pansharpening result.

Figure 10. SRMSAM results in Experiment 2 (S = 4). (a) Reference image. (b) SPSAM. (c) MSPSAM. (d) HSAM. (e) SRMSAM-PAN.

Figure 11. (a) Overall accuracy (OA, %) of the four methods in relation to the zoom factor S. (b) Kappa coefficient (Kappa) of the four methods in relation to the zoom factor S.

Figure 12. (a) RGB composites of the image (bands 102, 56, and 31 for red, green, and blue, respectively). (b) Coarse image (S = 4). (c) Panchromatic image. (d) Pansharpening result.

Figure 13. SRMSAM results in Experiment 3 (S = 4). (a) Reference image. (b) SPSAM. (c) MSPSAM. (d) HSAM. (e) SRMSAM-PAN.

Figure 14. (a) OA (%) of the four methods in relation to the zoom factor S. (b) Kappa of the four methods in relation to the zoom factor S.

Figure 15. (a) OA (%) of SRMSAM-PAN in relation to the weight parameter α in Experiment 2 (S = 4). (b) OA (%) of SRMSAM-PAN in relation to the weight parameter α in Experiment 3 (S = 4).

Figure 16. (a) OA (%) of SRMSAM-PAN and the pansharpening technique then classification (PTC) approach in Experiment 2. (b) OA (%) of SRMSAM-PAN and PTC in Experiment 3.

Figure 17. (a) OA (%) of the SRMSAM-PAN results in relation to the band-dependent spatial detail (BDSD) and PCA pansharpening methods in Experiment 2. (b) OA (%) of the SRMSAM-PAN results in relation to BDSD and PCA in Experiment 3.

24 pages, 13289 KiB  
Article
Hyperspectral Image Super-Resolution Inspired by Deep Laplacian Pyramid Network
by Zhi He and Lin Liu
Remote Sens. 2018, 10(12), 1939; https://doi.org/10.3390/rs10121939 - 2 Dec 2018
Cited by 24 | Viewed by 5109
Abstract
Existing hyperspectral sensors usually produce high-spectral-resolution but low-spatial-resolution images, and super-resolution has yielded impressive results in improving the resolution of the hyperspectral images (HSIs). However, most of the super-resolution methods require multiple observations of the same scene and improve the spatial resolution without fully considering the spectral information. In this paper, we propose an HSI super-resolution method inspired by the deep Laplacian pyramid network (LPN). First, the spatial resolution is enhanced by an LPN, which can exploit the knowledge from natural images without using any auxiliary observations. The LPN progressively reconstructs the high-spatial-resolution images in a coarse-to-fine fashion by using multiple pyramid levels. Second, spectral characteristics between the low- and high-resolution HSIs are studied by the non-negative dictionary learning (NDL), which is proposed to learn the common dictionary with non-negative constraints. The super-resolution results can finally be obtained by multiplying the learned dictionary and its corresponding sparse codes. Experimental results on three hyperspectral datasets demonstrate the feasibility of the proposed method in enhancing the spatial resolution of the HSI while preserving the spectral information simultaneously.
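
The final reconstruction step, multiplying a learned dictionary by its sparse codes, can be sketched as below. scikit-learn's NMF is used here as a stand-in for the paper's non-negative dictionary learning (NDL); the function name, variable names, and shapes are our assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def ndl_reconstruct(hsi_low, n_atoms=30):
    """hsi_low: (n_pixels, n_bands) array of non-negative reflectances.
    Returns the product of non-negative codes and dictionary atoms."""
    model = NMF(n_components=n_atoms, init='nndsvda', max_iter=500)
    codes = model.fit_transform(hsi_low)   # (n_pixels, n_atoms), non-negative
    dictionary = model.components_         # (n_atoms, n_bands), non-negative
    return codes @ dictionary              # reconstructed spectra
```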

Figure 1. Schematic illustration of the proposed hyperspectral image (HSI) super-resolution method.

Figure 2. General architecture of the deep Laplacian pyramid network (LPN).

Figure 3. Structure of a recursive block in the feature-embedding sub-network.

Figure 4. The high-resolution red-green-blue (RGB) images of the eight test data from the CAVE dataset. (a) balloons; (b) egyptian_statue; (c) face; (d) fake_and_real_lemon_slices; (e) fake_and_real_strawberries; (f) oil_painting; (g) photo_and_face; and (h) pompoms.

Figure 5. The high-resolution RGB images of the (a) Indian Pines dataset and (b) Pavia Center dataset.

Figure 6. Reconstructed image of oil_painting from the CAVE dataset. (a) Original image; (b) the low-resolution image; (c) bicubic; (d) Zeyde; (e) anchored neighborhood regression (ANR); (f) neighbor embedding with least squares (NE + LS); (g) neighbor embedding with non-negative least squares (NE + NNLS); (h) neighbor embedding with locally linear embedding (NE + LLE); (i) A+; (j) super-resolution convolutional neural network (SRCNN); (k) deep Laplacian pyramid network and non-negative dictionary learning (LPN-NDL); (l) coupled non-negative matrix factorization (CNMF); (m) guided filter principal component analysis (GFPCA); (n) Gram-Schmidt spectral sharpening (GS); (o) adaptive GS (GSA); and (p) HySure.

Figure 7. Reconstructed image of the Indian Pines dataset. (a) Original image; (b) the low-resolution image; (c) bicubic; (d) Zeyde; (e) ANR; (f) NE + LS; (g) NE + NNLS; (h) NE + LLE; (i) A+; (j) SRCNN; (k) LPN-NDL; (l) CNMF; (m) GFPCA; (n) GS; (o) GSA; and (p) HySure.

Figure 8. Reconstructed image of the Pavia Center dataset. (a) Original image; (b) the low-resolution image; (c) bicubic; (d) Zeyde; (e) ANR; (f) NE + LS; (g) NE + NNLS; (h) NE + LLE; (i) A+; (j) SRCNN; (k) LPN-NDL; (l) CNMF; (m) GFPCA; (n) GS; (o) GSA; and (p) HySure.

Figure 9. Spectral signatures of pixels located at (10, 10) in the (a) CAVE (oil_painting image); (b) CAVE (photo_and_face); (c) Indian Pines; and (d) Pavia Center datasets.

Figure 10. Peak signal-to-noise ratios (PSNRs) of different bands in the (a) CAVE (oil_painting image); (b) CAVE (photo_and_face); (c) Indian Pines; and (d) Pavia Center datasets.

Figure 11. Results of the Kruskal–Wallis test to compare the proposed method with both single-image methods and auxiliary-based methods. (a) Box-plot of the Kruskal–Wallis test and (b) graphical presentation of the rank difference between any two methods.

Figure 12. Impact of the up-sampling scale factor S on the RMSE for the (a) Indian Pines and (b) Pavia Center datasets.

Figure 13. Impact of the training epochs on the RMSE for the (a) Indian Pines and (b) Pavia Center datasets.

Figure 14. Impact of the parameters λ and β on the RMSE for the (a) Indian Pines and (b) Pavia Center datasets.

20 pages, 1767 KiB  
Article
Spectral and Spatial Classification of Hyperspectral Images Based on Random Multi-Graphs
by Feng Gao, Qun Wang, Junyu Dong and Qizhi Xu
Remote Sens. 2018, 10(8), 1271; https://doi.org/10.3390/rs10081271 - 12 Aug 2018
Cited by 52 | Viewed by 8798
Abstract
Hyperspectral image classification has been acknowledged as the fundamental and challenging task of hyperspectral data processing. The abundance of spectral and spatial information has provided great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral and spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method, which is rarely considered in hyperspectral image classification. It is empirically verified that the semi-supervised RMG deals well with small sample setting problems. This kind of problem is very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; spatial features and spectral features are then stacked into high dimensional vectors. The high dimensional vectors are fed into the RMG for classification. By randomly selecting a subset of features to create a graph, the proposed method can achieve excellent classification performance. The experiments on three real hyperspectral datasets have demonstrated that the proposed method exhibits better performance than several closely related methods.
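
The spectral-spatial stacking described in the abstract can be sketched as follows, with scikit-image's local_binary_pattern as a simplified stand-in for the paper's spatial features (the linear-prediction-error analysis and patch-wise LBP histograms are omitted here, so treat this as a structural illustration only).

```python
import numpy as np
from skimage.feature import local_binary_pattern

def stack_features(cube, P=8, R=1):
    """cube: (rows, cols, bands). Returns (rows*cols, 2*bands) vectors
    that stack raw spectra with per-band LBP codes."""
    rows, cols, bands = cube.shape
    lbp = np.stack([local_binary_pattern(cube[:, :, b], P, R)
                    for b in range(bands)], axis=-1)
    spectral = cube.reshape(-1, bands)
    spatial = lbp.reshape(-1, bands)
    return np.hstack([spectral, spatial])  # input to the RMG classifier
```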

Figure 1. Flow chart of the proposed SS-RMG method.

Figure 2. Implementation of LBP feature extraction.

Figure 3. Flow chart of the Random Multi-Graphs algorithm.

Figure 4. Indian Pines dataset and corresponding ground truth. (a) False color composite image (R-G-B = bands 50-27-17); (b) the ground truth image with 16 land-cover classes.

Figure 5. Pavia University dataset and corresponding ground truth. (a) False color composite image (R-G-B = bands 10-27-46); (b) the ground truth image with 9 land-cover classes.

Figure 6. Baffin Bay dataset and corresponding ground truth. (a) False color composite image; (b) the ground truth image with 4 classes.

Figure 7. Influence of graph numbers.

Figure 8. Influence of spectral band numbers.

Figure 9. Classification performance versus different patch sizes.

Figure 10. Classification results of different methods on the Indian Pines dataset. (a) Ground-truth map; (b) EPF-G; (c) IFRF; (d) LBP-ELM; (e) R-VCANet; (f) proposed SS-RMG.

Figure 11. Classification results of different methods on the Baffin Bay dataset. (a) Ground-truth map; (b) EPF-G; (c) IFRF; (d) LBP-ELM; (e) R-VCANet; (f) proposed SS-RMG.

Figure 12. Classification results of different methods on the Pavia University dataset. (a) Ground-truth map; (b) EPF-G; (c) IFRF; (d) LBP-ELM; (e) R-VCANet; (f) proposed SS-RMG.

Figure 13. Influence of training sample number on the Indian Pines dataset.

30941 KiB  
Article
A New Low-Rank Representation Based Hyperspectral Image Denoising Method for Mineral Mapping
by Lianru Gao, Dan Yao, Qingting Li, Lina Zhuang, Bing Zhang and José M. Bioucas-Dias
Remote Sens. 2017, 9(11), 1145; https://doi.org/10.3390/rs9111145 - 8 Nov 2017
Cited by 51 | Viewed by 7176
Abstract
Hyperspectral imaging technology has been used for geological analysis for many years wherein mineral mapping is the dominant application for hyperspectral images (HSIs). The very high spectral resolution of HSIs enables the identification and the diagnosis of different minerals with detection accuracy far beyond that offered by multispectral images. However, HSIs are inevitably corrupted by noise during acquisition and transmission processes. The presence of noise may significantly degrade the quality of the extracted mineral information. In order to improve the accuracy of mineral mapping, denoising is a crucial pre-processing task. By leveraging on low-rank and self-similarity properties of HSIs, this paper proposes a state-of-the-art HSI denoising algorithm that implements two main steps: (1) signal subspace learning via fine-tuned Robust Principle Component Analysis (RPCA); and (2) denoising the images associated with the representation coefficients, with respect to an orthogonal subspace basis, using BM3D, a self-similarity based state-of-the-art denoising algorithm. Accordingly, the proposed algorithm is named Hyperspectral Denoising via Robust principle component analysis and Self-similarity (HyDRoS), which can be considered as a supervised version of FastHyDe. The effectiveness of HyDRoS is evaluated in a series of mineral mapping experiments using noise-reduced AVIRIS and Hyperion HSIs. In these experiments, the proposed denoiser yielded systematically state-of-the-art performance.
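
Schematically, the two-step idea (learn a signal subspace, then denoise the representation images) looks like the sketch below. A plain SVD replaces the fine-tuned RPCA, and a Gaussian filter replaces BM3D, so this is a structural illustration under those substitutions, not the HyDRoS algorithm itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subspace_denoise(cube, k=8, sigma=1.0):
    """cube: (rows, cols, bands). Projects onto a k-dimensional
    subspace, denoises each eigen-image, and projects back."""
    rows, cols, bands = cube.shape
    Y = cube.reshape(-1, bands)                    # pixels x bands
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    E = Vt[:k].T                                   # orthogonal basis, bands x k
    Z = (Y @ E).reshape(rows, cols, k)             # eigen-images
    Z = np.stack([gaussian_filter(Z[:, :, i], sigma)
                  for i in range(k)], axis=-1)     # denoise each eigen-image
    return (Z.reshape(-1, k) @ E.T).reshape(rows, cols, bands)
```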

Figure 1. (a) Clean HSI data cube; (b) noisy HSI data cube; (c) eigenvalues of the sample correlation matrices computed from the clean and noisy observations shown in (a,b), respectively.

Figure 2. Eigen-images of an AVIRIS hyperspectral image at Cuprite Mining District before and after filtering: (a) 2nd eigen-image; (b) 3rd eigen-image; (c) 4th eigen-image; (d) 2nd filtered eigen-image; (e) 3rd filtered eigen-image; (f) 4th filtered eigen-image.

Figure 3. AVIRIS hyperspectral data acquired at Cuprite Mining District: (a) false-color image (R: band 183 (2095.0 nm), G: band 193 (2195.0 nm), and B: band 207 (2334.6 nm)); (b) reference mineral map for Alunite, Chalcedony, and Kaolinite of (a); (c) reference spectral signatures for Alunite, Chalcedony, and Kaolinite.

Figure 4. Power of signal/noise and performance of HyDRoS with band reduction.

Figure 5. PSNR of each band after denoising for different noise variances: (a) σ = 0.02; (b) σ = 0.04; (c) σ = 0.06; (d) σ = 0.08; (e) σ = 0.1; (f) mean(σ) = 0.053.

Figure 6. Power of the signal (green) and of the noise (red) and performance of HyDRoS as a function of the subspace dimension.

Figure 7. Spectral signatures of the three minerals before and after denoising: (a) pixel located at (257, 629) in the ROI of Alunite; (b) pixel located at (1359, 1071) in the ROI of Chalcedony; (c) pixel located at (157, 503) in the ROI of Kaolinite.

Figure 8. ROC curves for the three minerals obtained by SAM: (a) Alunite; (b) Chalcedony; (c) Kaolinite.

Figure 9. ROC curves for the three minerals obtained by SFF: (a) Alunite; (b) Chalcedony; (c) Kaolinite.

Figure 10. Hyperion hyperspectral data acquired at Cuprite Mining District: (a) false-color image (R: band 194 (2092.8 nm), G: band 204 (2193.7 nm), and B: band 218 (2335.0 nm)); (b) reference mineral map for Alunite, Chalcedony, and Kaolinite of (a); (c) reference spectral signatures for Alunite, Chalcedony, and Kaolinite.

Figure 11. Flowchart of the real data experiment using Hyperion Level 1R data.

Figure 12. Denoising results for band 37 (2365.2 nm) of the Hyperion hyperspectral image at Cuprite Mining District: (a) original; (b) BM3D (16 s); (c) BM4D (69 s); (d) PCA + BM4D (58 s); (e) LRMR (21 s); (f) NAILRMA (46 s); (g) FastHyDe(un) (11 s); (h) HyDRoS (34 s).

Figure 13. ROC curves for the three minerals obtained by SAM: (a) Alunite; (b) Chalcedony; (c) Kaolinite.

Figure 14. ROC curves for the three minerals obtained by SFF: (a) Alunite; (b) Chalcedony; (c) Kaolinite.

7466 KiB  
Article
Retrieval of Biophysical Crop Variables from Multi-Angular Canopy Spectroscopy
by Martin Danner, Katja Berger, Matthias Wocher, Wolfram Mauser and Tobias Hank
Remote Sens. 2017, 9(7), 726; https://doi.org/10.3390/rs9070726 - 14 Jul 2017
Cited by 62 | Viewed by 7524
Abstract
The future German Environmental Mapping and Analysis Program (EnMAP) mission, due to launch in late 2019, will deliver high resolution hyperspectral data from space and will thus contribute to a better monitoring of the dynamic surface of the earth. Exploiting the satellite’s ±30° across-track pointing capabilities will allow for the collection of hyperspectral time-series of homogeneous quality. Various studies have shown the possibility to retrieve geo-biophysical plant variables, like leaf area index (LAI) or leaf chlorophyll content (LCC), from narrowband observations with fixed viewing geometry by inversion of radiative transfer models (RTM). In this study we assess the capability of the well-known PROSPECT 5B + 4SAIL (Scattering by Arbitrarily Inclined Leaves) RTM to estimate these variables from off-nadir observations obtained during a field campaign with respect to EnMAP-like sun–target–sensor-geometries. A novel approach for multiple inquiries of a large look-up-table (LUT) in hierarchical steps is introduced that accounts for the varying instances of all variables of interest. Results show that anisotropic effects are strongest for early growth stages of the winter wheat canopy, which influences also the retrieval of the variables. RTM inversions from off-nadir spectra lead to a decreased accuracy for the retrieval of LAI with a relative root mean squared error (rRMSE) of 18% at nadir vs. 25% (backscatter) and 24% (forward scatter) at off-nadir. For LCC estimations, however, off-nadir observations yield improvements, i.e., rRMSE (nadir) = 24% vs. rRMSE (forward scatter) = 20%. It follows that for a variable retrieval through RTM inversion, the final user will benefit from EnMAP time-series for biophysical studies regardless of the acquisition angle and will thus be able to exploit the maximum revisit capability of the mission.
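
The look-up-table inversion with multiple best fits can be sketched compactly: given LUT spectra simulated by an RTM such as PROSAIL (the simulation itself is not shown), each variable is retrieved as the mean over the k entries closest to the measurement. The RMSE cost function and the value of k below are illustrative assumptions, and the paper's hierarchical multiple-inquiry strategy is not reproduced.

```python
import numpy as np

def lut_invert(measured, lut_spectra, lut_params, k=100):
    """measured: (n_bands,); lut_spectra: (n_entries, n_bands);
    lut_params: (n_entries, n_vars), e.g. columns for LAI and LCC."""
    rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
    best = np.argsort(rmse)[:k]           # k best-fitting LUT members
    return lut_params[best].mean(axis=0)  # mean of their parameters
```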

Figure 1. Sun–target–sensor geometry. The three arrows illustrate the three different observer zenith angles (OZA). A positive OZA is associated with backscatter and commonly shows higher rates of reflectance than negative OZAs (forward scatter).

Figure 2. Impact of the choice of the number of best fits on the retrieval accuracy. The measured winter wheat spectrum was obtained on 10 April 2014 with an Analytical Spectral Devices (ASD) FieldSpec 3 Jr and then converted into pseudo-EnMAP reflectances. The other signatures are the closest 100 members of the LUT, as simulated by the PROSAIL model. The best estimate, i.e., the model run with the least distance to the measured spectrum, is drawn in green. With increasing statistical distance, the colors fade from green to yellow until the 100th best estimate is finally plotted in red.

Figure 3. Red-green-blue (RGB) composite imagery (left) and colored infrared (right) illustration of the spectral image mosaic (standard deviation stretch n = 3.0). Each of the stripes represents the same area of interest under a different observer zenith angle (OZA). OZA = −30° is associated with forward scatter, OZA = 0° with nadir, and OZA = +30° with backscatter observations. The stripes are composed of 16 sub-images of 3 × 3 pixels, each representing a different field date (nine in 2014 and seven in 2015), as indicated by the Julian day of year (DOY).

Figure 4. Illustration of the Anisotropy Factor (ANIF) for three different phenological stages of winter wheat (early: bright green; medium: dark green; late: yellow) and observation angles: (a) ANIF for forward scatter (ANIF_fs); (b) ANIF for backscatter (ANIF_bs); (c) the off-nadir ratio (ANIF_fs/bs).

Figure 5. (a–f) Evaluation of the best inversion results for LAI (left column) and LCC (right column). Nadir is displayed in the top row, backscatter (OZA = +30°) in the middle, and forward scatter (OZA = −30°) in the bottom row. The slope of the regression line is indicated as m.

Figure 6. Spatial distribution of measured and estimated LCC (left) and LAI (right) for the two growing seasons of 2014 and 2015 under different observation angles.

Figure 7. Visualization of the residuals, i.e., in situ measurements minus parameter estimations. For LCC, purple pixels show an underestimation by the model, while green pixels indicate overestimation. For LAI, green to blue hues show model underestimation and brown pixels model overestimation. Pastel yellow shades indicate good model agreement with the in situ observations.

Figure 8. Canopy chlorophyll content, as the product of LAI and LCC, combines the performance of the two underlying parameters. Results are shown for (a) nadir, (b) backscatter, and (c) forward scatter observations. The slope of the regression line is indicated as m.
