Hyperspectral Imaging (HSI) Sensing and Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (15 October 2020) | Viewed by 51844

Special Issue Editor


Dr. Benoit Vozel
Guest Editor
Institut d'Électronique et des Technologies du numéRique, 35700 Rennes, France
Interests: multimodal remote sensing data analysis and processing; machine and deep learning; image registration; adaptive multichannel signal and image processing; blind image restoration and blind estimation of image noise characteristics

Special Issue Information

Dear Colleagues,

Continuing advances in hyperspectral image capture technologies at increasingly affordable costs, together with the ever-increasing use of hyperspectral data in cross-disciplinary commercial and scientific fields, push us to improve our analysis and processing capabilities across the whole acquisition process. The main goal is to provide end-users with flexible, easy-to-use, and smart sensing systems suitable for a mature operational processing flow built on high-precision standard surface reflectance products, in terms of recovery quality (imaging spectrometry), control, and telemetry.

The aim of this Special Issue is thus to compile recent advances in hyperspectral imaging sensing and analysis. Contributions on hyperspectral sensing systems that offer timely, high-quality observational capabilities and meet end-users' requirements and expectations for interdisciplinary applications are targeted.

This includes all processing stages, from acquisition to advanced processing of georeferenced data, covering the latest scientific, technological, and algorithmic progress that makes it possible to take better advantage of data sensed in either standalone or cooperative mode.

A broad spectrum of recent and emerging applications illustrating the practical deployment of hyperspectral sensing and analysis systems is also expected.

Dr. Benoit Vozel
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • (Standalone/cooperative) imaging sensors and platforms
  • Calibration
  • Radiometric, atmospheric, and geometric corrections
  • Georeferencing
  • Compression
  • Filtering
  • Restoration
  • Unmixing
  • Target detection
  • Anomaly detection
  • Data classification
  • Data fusion
  • Bio- and geophysical variable retrieval

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

21 pages, 13766 KiB  
Article
A Full-Spectrum Registration Method for Zhuhai-1 Satellite Hyperspectral Imagery
by Jinjun Meng, Jiaqi Wu, Linlin Lu, Qingting Li, Qiang Zhang, Suyun Feng and Jun Yan
Sensors 2020, 20(21), 6298; https://doi.org/10.3390/s20216298 - 5 Nov 2020
Cited by 6 | Viewed by 3193
Abstract
Accurate registration is an essential prerequisite for analysis and applications involving remote sensing imagery. It is usually difficult to extract enough matching points for inter-band registration in hyperspectral imagery because land features show different spectral responses in different image bands; this is especially true for non-adjacent bands. The inconsistent geometric distortion caused by topographic relief also makes a single affine transformation unsuitable for transforming the entire image. Currently, accurate registration between spectral bands of Zhuhai-1 satellite hyperspectral imagery remains challenging. In this paper, a full-spectrum registration method was proposed to address this problem. The method combines a transfer strategy, based on the affine transformation relationships between adjacent bands, with differential correction from dense Delaunay triangulation. First, the scale-invariant feature transform (SIFT) was used to extract and match feature points of adjacent bands; the RANdom SAmple Consensus (RANSAC) algorithm and the least squares method were then used to eliminate mismatched pairs and obtain fine matching point pairs. Second, a dense Delaunay triangulation was constructed from the fine matching point pairs, and the affine transformation relation for non-adjacent bands was established for each triangle using the transfer strategy. Finally, the affine transformation relation was used to perform differential correction for each triangle. Three Zhuhai-1 satellite hyperspectral images covering different terrains were used as experimental data. The evaluation showed that the adjacent-band registration accuracy ranged from 0.2 to 0.6 pixels, the structural similarity and cosine similarity measures between non-adjacent bands were both greater than 0.80, and the full-spectrum registration accuracy was less than 1 pixel. These registration results can meet the needs of Zhuhai-1 hyperspectral imagery applications in various fields.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
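As a concrete illustration of the adjacent-band matching step described in the abstract, the following is a minimal Python sketch using OpenCV's SIFT features and RANSAC-based affine estimation; the band arrays, 8-bit scaling, and thresholds are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of SIFT + RANSAC adjacent-band matching (not the
# authors' exact pipeline). Bands must be scaled to uint8 for SIFT.
import cv2
import numpy as np

def match_adjacent_bands(band_ref: np.ndarray, band_tgt: np.ndarray):
    """Estimate an affine transform mapping band_tgt onto band_ref."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(band_ref, None)
    kp2, des2 = sift.detectAndCompute(band_tgt, None)

    # Lowe's ratio test keeps only distinctive matches.
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp2[m.queryIdx].pt for m in good])
    dst = np.float32([kp1[m.trainIdx].pt for m in good])

    # RANSAC rejects mismatched pairs; the surviving inliers play the role
    # of the "fine matching point pairs" in the abstract.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=1.0)
    keep = inliers.ravel() == 1
    return A, src[keep], dst[keep]
```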
Figures

Figure 1. Focal plane arrangement for the hyperspectral camera.
Figure 2. CMOS sensor spectrum distribution.
Figure 3. Spectral response functions of OHS hyperspectral bands.
Figure 4. Full-spectrum registration workflow.
Figure 5. Schematic diagram of the adjustment for matching points: (a) reference band; (b) band to be registered.
Figure 6. Delaunay triangulation: (a) reference band (subset); (b) band to be registered (subset).
Figure 7. Full-spectrum registration diagram.
Figure 8. Schematic diagram of matching points outside the Delaunay triangulation.
Figure 9. Local comparison of registration results between different bands: (a) OHS–Arizona, USA; (b) OHS–Guangxi, China; (c) OHS–Xinjiang, China.
Figure 10. True- and false-color composite images after registration. Left column: true color (R: B14, G: B08, B: B03); right column: false color (R: B28, G: B08, B: B03). (a) OHS–Arizona, USA; (b) OHS–Guangxi, China; (c) OHS–Xinjiang, China.
Figure 11. Comparison of registration results using the IOEM method and the proposed method.
Figure 12. Registration accuracy of adjacent bands using different methods.
Figure 13. Similarity measures for the registered spectral bands: (a) OHS–Arizona, USA; (b) OHS–Guangxi, China; (c) OHS–Xinjiang, China.
19 pages, 8437 KiB  
Article
Spatial–Spectral Feature Refinement for Hyperspectral Image Classification Based on Attention-Dense 3D-2D-CNN
by Jin Zhang, Fengyuan Wei, Fan Feng and Chunyang Wang
Sensors 2020, 20(18), 5191; https://doi.org/10.3390/s20185191 - 11 Sep 2020
Cited by 36 | Viewed by 5072
Abstract
Convolutional neural networks provide an ideal solution for hyperspectral image (HSI) classification. However, classification performance is not satisfactory when only limited training samples are available. Focusing on "small-sample" hyperspectral classification, we proposed a novel 3D-2D convolutional neural network (CNN) model named AD-HybridSN (Attention-Dense-HybridSN). In the proposed model, a dense block reuses shallow features to better exploit hierarchical spatial–spectral features, and subsequent depthwise separable convolutional layers discriminate the spatial information. Spatial–spectral features are further refined by a channel attention method and a spatial attention method, applied after every 3D convolutional layer and every 2D convolutional layer, respectively. Experimental results indicate that the proposed model can learn more discriminative spatial–spectral features from very few training samples. On Indian Pines, Salinas, and the University of Pavia, AD-HybridSN obtains 97.02%, 99.59%, and 98.32% overall accuracy using only 5%, 1%, and 1% labeled data for training, respectively, which is far better than all the comparison models.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
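The channel attention described above follows the familiar squeeze-and-excitation pattern; below is a generic Keras sketch of such a block. The layer sizes are assumptions, and the authors' exact AD-HybridSN attention layer may differ.

```python
# Generic squeeze-and-excitation-style channel attention (a sketch, not the
# paper's exact layer): global pooling, two dense layers, channel rescaling.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    """Reweight the channels of an (H, W, C) feature map."""
    c = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)           # squeeze to (C,)
    w = layers.Dense(c // reduction, activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)     # per-channel weights
    w = layers.Reshape((1, 1, c))(w)
    return layers.Multiply()([x, w])                 # excite: rescale channels

inp = tf.keras.Input((32, 32, 64))                   # assumed feature map size
model = tf.keras.Model(inp, channel_attention(inp))
model.summary()
```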
Figures

Figure 1. Illustration of the proposed Attention-Dense-HybridSN (AD-HybridSN).
Figure 2. Overall architecture of the channel attention mechanism used in AD-HybridSN. An input feature map of dimension L × W × C × N (batch size not shown) is reshaped to L × W × (CN); after a global pooling operation and two fully connected layers, specific weights are generated for every channel and multiplied with the reshaped 3D tensor to complete feature refinement, after which the tensor is reshaped back to L × W × C × N.
Figure 3. Overall architecture of the spatial attention mechanism used in AD-HybridSN. From an input feature map of dimension L × W × C (batch size not shown), max pooling, average pooling, and concatenation produce a feature map of dimension 1 × 1 × 2C; a convolutional layer with a single kernel and a sigmoid activation learns where more attention is needed, and the obtained weights are multiplied with the input feature map to complete spatial feature refinement.
Figure 4. False-color image and color coding for Indian Pines.
Figure 5. False-color image and color coding for Salinas.
Figure 6. False-color image and color coding for the University of Pavia.
Figure 7. Classification maps of Indian Pines: (a) ground truth; (b)–(f) predicted maps for Res-2D-CNN, Res-3D-CNN, HybridSN, R-HybridSN, and AD-HybridSN, respectively.
Figure 8. Classification maps of Salinas: (a) ground truth; (b)–(f) predicted maps for Res-2D-CNN, Res-3D-CNN, HybridSN, R-HybridSN, and AD-HybridSN, respectively.
Figure 9. Classification maps of the University of Pavia: (a) ground truth; (b)–(f) predicted maps for Res-2D-CNN, Res-3D-CNN, HybridSN, R-HybridSN, and AD-HybridSN, respectively.
14 pages, 3042 KiB  
Article
Application of Convolutional Neural Network-Based Feature Extraction and Data Fusion for Geographical Origin Identification of Radix Astragali by Visible/Short-Wave Near-Infrared and Near Infrared Hyperspectral Imaging
by Qinlin Xiao, Xiulin Bai, Pan Gao and Yong He
Sensors 2020, 20(17), 4940; https://doi.org/10.3390/s20174940 - 1 Sep 2020
Cited by 36 | Viewed by 3530
Abstract
Radix Astragali is a prized traditional Chinese functional food used for both medicinal and food purposes, with benefits such as immunomodulation, anti-tumor, and anti-oxidation effects. The geographical origin of Radix Astragali has a significant impact on its quality attributes, so determining it is essential for quality evaluation. Hyperspectral imaging covering the visible/short-wave near-infrared range (Vis-NIR, 380–1030 nm) and the near-infrared range (NIR, 874–1734 nm) was applied to identify Radix Astragali from five different geographical origins. Principal component analysis (PCA) was utilized to form score images for preliminary qualitative identification, and PCA and a convolutional neural network (CNN) were used for feature extraction. Measurement-level and feature-level fusion were performed on the original spectra from the two spectral ranges and the corresponding features. Support vector machine (SVM), logistic regression (LR), and CNN models based on full wavelengths, extracted features, and fusion datasets were established with excellent results: all models obtained accuracies of over 98% on the different datasets. The results illustrate that hyperspectral imaging combined with a CNN and a fusion strategy can be an effective method for origin identification of Radix Astragali.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
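A minimal sketch of the feature-level fusion idea described above: PCA features from the two spectral ranges are concatenated and fed to an SVM. The arrays below are synthetic placeholders, not the Radix Astragali spectra, and the component counts are assumptions.

```python
# Feature-level fusion sketch: PCA per spectral range, concatenate, classify.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vis_nir = rng.normal(size=(200, 512))   # stand-in Vis-NIR spectra
nir = rng.normal(size=(200, 256))       # stand-in NIR spectra
y = rng.integers(0, 5, size=200)        # five geographical origins

feat = np.hstack([PCA(n_components=10).fit_transform(vis_nir),
                  PCA(n_components=10).fit_transform(nir)])
clf = SVC(kernel="rbf").fit(feat, y)
print("training accuracy:", clf.score(feat, y))
```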
Figures

Figure 1. RGB images of Radix Astragali from Gansu province, Heilongjiang province, the Inner Mongolia autonomous region, Shanxi province, and the Xinjiang Uygur autonomous region.
Figure 2. Structure of the convolutional neural network (CNN) and flowchart of the deep spectral features used in the support vector machine (SVM)/logistic regression (LR)/CNN classifier.
Figure 3. Average spectra with standard deviation (SD) of Radix Astragali from five origins in the ranges (a) 441–947 nm (SD shown at 657 and 900 nm) and (b) 975–1646 nm (SD shown at 1116, 1207, 1311, and 1494 nm).
Figure 4. Flow chart of the analysis process in this study.
Figure 5. Principal component analysis (PCA) score images of the first six PCs for Radix Astragali from five geographical origins (left: PCA score images of hyperspectral images in the Vis-NIR region; right: in the NIR region).
Figure 6. Prediction maps for Radix Astragali of all geographical origins based on the CNN model using full wavelengths: (a) Vis-NIR spectra; (b) NIR spectra.
21 pages, 7715 KiB  
Article
Restoration and Calibration of Tilting Hyperspectral Super-Resolution Image
by Xizhen Zhang, Aiwu Zhang, Mengnan Li, Lulu Liu and Xiaoyan Kang
Sensors 2020, 20(16), 4589; https://doi.org/10.3390/s20164589 - 15 Aug 2020
Cited by 3 | Viewed by 2411
Abstract
Tilting sampling is a novel sampling mode for achieving higher-resolution hyperspectral imagery. However, most studies of tilting images have focused on a single band, losing the spectral character of hyperspectral imagery. This study focuses on the restoration of tilting hyperspectral imagery and the practicality of its results. First, we reduced the large data volume of tilting hyperspectral imagery with the p-value sparse matrix band selection method (pSMBS). We then restored the reduced imagery with the optimal reciprocal cell combined modulation transfer function (MTF) method. Next, we built the relationship between the restored tilting image and the original normal image, employing the least squares method to solve the calibration equation for each band. Finally, the calibrated tilting image and the original normal image were both classified with an unsupervised method (K-means) to confirm the practicality of calibrated tilting images in remote sensing applications. The classification results demonstrate that the optimal reciprocal cell combined MTF method can effectively restore the tilting image and that the calibrated tilting image can be used in remote sensing applications: the restored and calibrated tilting image has higher resolution and better spectral fidelity.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
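The per-band calibration step lends itself to a short least-squares sketch: for each band, a linear gain and offset are fitted from the restored tilting image to the co-registered normal image. Array names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Per-band linear calibration via least squares (a sketch under assumed
# (bands, H, W) cube layout; the paper's exact equation may differ).
import numpy as np

def calibrate_bands(tilting: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """tilting, normal: co-registered (bands, H, W) image cubes."""
    out = np.empty_like(tilting, dtype=np.float64)
    for b in range(tilting.shape[0]):
        # Fit normal[b] ≈ gain * tilting[b] + offset in the least-squares sense.
        gain, offset = np.polyfit(tilting[b].ravel(), normal[b].ravel(), deg=1)
        out[b] = gain * tilting[b] + offset
    return out
```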
Figures

Figure 1. Tilting sampling and normal sampling.
Figure 2. (a) Imaging principle of the sensor used; (b) sampling diagram of the sensor used.
Figure 3. (a) Original tilting image; (b) p-value distribution map.
Figure 4. Flow chart of the optimal reciprocal cell combined modulation transfer function (MTF) method.
Figure 5. Pseudocolor tilting image: (a) original; (b) restored.
Figure 6. Details of (a) the original tilting image and (b) the restored tilting image.
Figure 7. Images acquired by normal or tilting sampling: (a) normal image; (b) tilting image; (c) tilting image after geometric correction.
Figure 8. Pseudocolor images of the regions of interest: (a) normal image of region 1; (b) normal image of region 2; (c) tilting image of region 1; (d) tilting image of region 2.
Figure 9. Restored pseudocolor images of the regions of interest: (a) restored normal image of region 1; (b) restored normal image of region 2; (c) restored tilting image of region 1; (d) restored tilting image of region 2.
Figure 10. Mean DN curves of the same object areas in the normal image and the restored tilting image: (a) corresponding area 1; (b) corresponding area 2; (c) corresponding area 3.
Figure 11. Flow chart of tilting image calibration.
Figure 12. Calibration equation fitting for each restored tilting image band.
Figure 13. Pseudocolor images: (a) original normal image; (b) original tilting image; (c) calibrated tilting image.
Figure 14. Comparison of mean values between the calibrated tilting image and the original normal image.
Figure 15. Classification results of (a) the original normal image and (b) the calibrated tilting image.
13 pages, 3402 KiB  
Article
Low-Cost Hyperspectral Imaging System: Design and Testing for Laboratory-Based Environmental Applications
by Mary B. Stuart, Leigh R. Stanger, Matthew J. Hobbs, Tom D. Pering, Daniel Thio, Andrew J.S. McGonigle and Jon R. Willmott
Sensors 2020, 20(11), 3293; https://doi.org/10.3390/s20113293 - 9 Jun 2020
Cited by 35 | Viewed by 8690
Abstract
The recent surge in the development of low-cost, miniaturised technologies provides a significant opportunity to develop miniaturised hyperspectral imagers at a fraction of the cost of currently available commercial set-ups. This article introduces a low-cost, laboratory-based hyperspectral imager built from commercially available components. The imager is capable of quantitative and qualitative hyperspectral measurements, and it was tested in a variety of laboratory-based environmental applications, where it collected data that correlate well with existing datasets. In its current format, the imager is an accurate laboratory measurement tool with significant potential for future development; it represents an initial step toward accessible hyperspectral technologies and a robust basis for future improvements.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
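Equation (1), referenced in the figure captions below, is not reproduced on this page; as an assumption, the sketch shows the standard dark/white-reference normalisation that such imagers typically use to convert raw counts to scaled reflectance, which may differ from the authors' exact formula.

```python
# Assumed flat-field reflectance normalisation (a common convention, not
# necessarily the paper's Equation (1)).
import numpy as np

def scaled_reflectance(raw, dark, white, r_ref=0.99):
    """raw, dark, white: (H, W, bands) count cubes; r_ref: panel reflectance."""
    # Subtract the dark current and ratio against the white reference panel.
    return r_ref * (raw - dark) / np.clip(white - dark, 1e-9, None)
```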
Figures

Figure 1. Components of the hyperspectral set-up.
Figure 2. Schematic diagram of the low-cost hyperspectral imager: (a) the rotary mirror system and (b) the miniature spectrometer, illustrating the main components of each device; inset (a1) displays the rotational axis of each mirror. Beam steering of the field of view (FOV) is provided by the mirrors. The image distance is ca. 66.6 mm: ca. 30 mm lens to mirrors, ca. 4.6 mm between mirrors, and ca. 32 mm mirrors to spectrometer. Not to scale.
Figure 3. Low-cost integrating sphere set-up for image capture. Schematic not to scale.
Figure 4. Current set-up of the hyperspectral imager: (A) the true set-up during image capture, with the imager covered by a dark box; (B) the alignment between the sphere and the imager with the dark box removed.
Figure 5. Spectral reflectance of a healthy apple measured over a five-day period, highlighting the pigment changes that occur during ripening; note the absorption features at ca. 550 nm, ca. 650 nm, and ca. 675 nm. Scaled reflectance corresponds to R_Target in Equation (1).
Figure 6. Bruise development over the measurement period: comparison between colour and hyperspectral datasets captured as a 128 × 128 pixel scene at 15 ms exposure per pixel; note the varying levels of detection at different wavelengths.
Figure 7. Example image captured with the low-cost hyperspectral imager, displaying flow banding; hyperspectral image taken at 613 nm from a 256 × 256 pixel scan.
Figure 8. Observed spectral reflectance of the sulphur sample (right); note the significant increase in reflectance from ca. 500 nm in the spectral data (left). Scaled reflectance corresponds to R_Target in Equation (1).
Figure 9. Variations in reflectance across the hyperspectral data for the sulphur target; images taken from a 128 × 128 pixel scan.
Figure 10. Spectral response across the visible spectrum for three dental shade tabs of varying shades. Scaled reflectance corresponds to R_Target in Equation (1).
17 pages, 5600 KiB  
Article
Distributed Compressed Hyperspectral Sensing Imaging Based on Spectral Unmixing
by Zhongliang Wang and Hua Xiao
Sensors 2020, 20(8), 2305; https://doi.org/10.3390/s20082305 - 17 Apr 2020
Cited by 8 | Viewed by 2915
Abstract
The huge volume of hyperspectral imagery demands enormous computational resources, storage memory, and bandwidth between the sensor and the ground stations. Compressed sensing theory has great potential to reduce this enormous cost by collecting only a few compressed measurements in the onboard imaging system. Inspired by distributed source coding, this paper proposes a distributed compressed sensing framework for hyperspectral imagery. As in distributed compressed video sensing, the spatial–spectral hyperspectral imagery is separated into a key band and compressed-sensing bands sampled at different rates during data collection. However, unlike distributed compressed video sensing, which uses side information for reconstruction, the widely used spectral unmixing method is employed to recover the hyperspectral imagery. First, endmembers are extracted from the compressed-sensing bands. Then, the endmembers of the key band are predicted by an interpolation method, and abundance estimation is achieved by exploiting a sparsity penalty. Finally, the original hyperspectral imagery is recovered through the linear mixing model. Extensive experiments on multiple real hyperspectral datasets demonstrate that the proposed method can effectively recover the original data, with a reconstruction peak signal-to-noise ratio that surpasses other state-of-the-art methods.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
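The final recovery step, abundance estimation followed by the linear mixing model, can be sketched as follows. The sparsity penalty described in the abstract is omitted here in favour of plain nonnegative least squares, so this is a simplification of the proposed DCHS algorithm, with assumed array shapes.

```python
# Linear-mixing-model recovery sketch: given endmembers E, estimate
# nonnegative abundances per pixel, then reconstruct X ≈ E @ A.
import numpy as np
from scipy.optimize import nnls

def reconstruct(E: np.ndarray, pixels: np.ndarray) -> np.ndarray:
    """E: (bands, n_endmembers) endmember matrix; pixels: (bands, n_pix)."""
    # Solve min ||E a - p||_2 with a >= 0 for each pixel spectrum p.
    A = np.column_stack([nnls(E, p)[0] for p in pixels.T])
    return E @ A  # recovered spectra, shape (bands, n_pix)
```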
Figures

Figure 1. Framework of distributed compressed hyperspectral sensing (DCHS).
Figure 2. Performance of different interpolation methods.
Figure 3. Sensitivity analysis of the regularization parameters λ1 and λ2 of the proposed DCHS algorithm for different numbers of bands per group Lg and sampling rates SR: (a) Lg = 20, SR = 0.0564; (b) Lg = 10, SR = 0.1048; (c) Lg = 5, SR = 0.2048.
Figure 4. Sensitivity analysis of the parameter μ of the proposed DCHS algorithm for different numbers of bands per group Lg.
Figure 5. Mean peak signal-to-noise ratio (MPSNR) curves of different algorithms: (a) Cuprite dataset; (b) Urban dataset; (c) PaviaU dataset.
Figure 6. Original and reconstructed pseudocolor images from different algorithms near the 0.05 sampling rate (top to bottom: Cuprite, Urban, PaviaU): (a) original; (b) MT-BCS; (c) CPPCA; (d) SSHCS; (e) SpeCA; (f) SSCR_SU; (g) DCHS.
Figure 7. The 28th-band residual images of different algorithms near the 0.05 sampling rate (top to bottom: Cuprite, Urban, PaviaU): (a) SSHCS; (b) SpeCA; (c) SSCR_SU; (d) DCHS.
Figure 8. Original and reconstructed spectral curves from different algorithms: (a) Cuprite dataset; (b) Urban dataset; (c) PaviaU dataset.
29 pages, 7479 KiB  
Article
Three-Dimensional ResNeXt Network Using Feature Fusion and Label Smoothing for Hyperspectral Image Classification
by Peida Wu, Ziguan Cui, Zongliang Gan and Feng Liu
Sensors 2020, 20(6), 1652; https://doi.org/10.3390/s20061652 - 16 Mar 2020
Cited by 26 | Viewed by 4695
Abstract
In recent years, deep learning methods have been widely used in hyperspectral image (HSI) classification tasks. Among them, spectral–spatial methods based on three-dimensional (3-D) convolution have shown good performance. However, with three-dimensional convolution, increasing network depth dramatically increases the number of parameters. In addition, previous methods do not make full use of spectral information: they mostly feed dimensionality-reduced data directly into the network, which results in poor classification ability for categories with small numbers of samples. To address these two issues, we designed an end-to-end 3D-ResNeXt network that further adopts feature fusion and a label smoothing strategy. On the one hand, the residual connections and split-transform-merge strategy alleviate the declining-accuracy phenomenon and decrease the number of parameters; the cardinality hyperparameter, rather than the network depth, can be adjusted to extract more discriminative features of HSIs and improve classification accuracy. On the other hand, to improve the classification accuracy of classes with small numbers of samples, we enrich the input of the 3D-ResNeXt spectral–spatial feature learning network with additional spectral feature learning, and finally use a loss function modified by the label smoothing strategy to address class imbalance. Experimental results on three popular HSI datasets demonstrate the superiority of the proposed network and an effective improvement in accuracy, especially for classes with small numbers of training samples.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
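The label smoothing strategy mentioned above can be illustrated with the built-in option of Keras' cross-entropy loss; the smoothing factor of 0.1 below is an assumed value, not necessarily the paper's setting.

```python
# Label smoothing sketch: soften one-hot targets to penalise over-confidence.
import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
# With smoothing 0.1 over 3 classes, the one-hot target [0, 1, 0] becomes
# roughly [0.033, 0.933, 0.033] before the cross-entropy is computed.
y_true = tf.constant([[0.0, 1.0, 0.0]])
y_pred = tf.constant([[0.05, 0.90, 0.05]])
print(float(loss_fn(y_true, y_pred)))
```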
Figures

Figure 1. Overall structure of the proposed hyperspectral image (HSI) classification framework.
Figure 2. The end-to-end HSI classification flowchart.
Figure 3. Three-dimensional spectral residual block for extracting spectral features.
Figure 4. A block of ResNet (left) and of ResNeXt with cardinality = 8 (right); each layer is shown as (number of input channels, filter size, number of output channels).
Figure 5. General structure of a ResNeXt block with cardinality = 8 (taking Block2_1 as an example).
Figure 6. Precision, recall, and F1-score of the classes with the smallest numbers of samples under different training ratios in the three HSI datasets (class 9, Oats, for the IN dataset; class 9, Shadows, for the UP dataset; class 7, Swamp, for the KSC dataset).
Figure 7. Precision, recall, and F1-score of the same classes under different input spatial sizes in the three HSI datasets.
Figure 8. Precision, recall, and F1-score of the same classes under different cardinalities in the three HSI datasets.
Figure 9. Precision, recall, and F1-score of the same classes for four models using 3-D convolutional neural networks (CNNs) in the three HSI datasets.
Figure 10. Classification results for the IN dataset: (a) false-color image; (b) ground-truth labels; (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 11. Classification results for the UP dataset: (a) false-color image; (b) ground-truth labels; (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 12. Classification results for the KSC dataset: (a) false-color image; (b) ground-truth labels; (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 13. OA and loss of models with different loss functions for the IN dataset: (a) original cross-entropy loss; (b) cross-entropy loss modified by the label smoothing strategy.
Figure 14. OA and loss of models with different loss functions for the UP dataset: (a) original cross-entropy loss; (b) cross-entropy loss modified by the label smoothing strategy.
Figure 15. OA and loss of models with different loss functions for the KSC dataset: (a) original cross-entropy loss; (b) cross-entropy loss modified by the label smoothing strategy.
Figure 16. Precision, recall, and F1-score of the classes with the smallest numbers of samples with different loss functions in the three HSI datasets.
Figure 17. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the IN dataset.
Figure 18. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the UP dataset.
Figure 19. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the KSC dataset.
15 pages, 4825 KiB  
Article
Classification of Granite Soils and Prediction of Soil Water Content Using Hyperspectral Visible and Near-Infrared Imaging
by Hwan-Hui Lim, Enok Cheon, Deuk-Hwan Lee, Jun-Seo Jeon and Seung-Rae Lee
Sensors 2020, 20(6), 1611; https://doi.org/10.3390/s20061611 - 13 Mar 2020
Cited by 9 | Viewed by 4241
Abstract
Soil water content is one of the most important physical indicators of landslide hazards, so quickly and non-destructively classifying soils and determining or predicting their water content are essential for landslide hazard detection. We investigated hyperspectral information in the visible and near-infrared regions (400–1000 nm) of 162 granite soil samples collected in Seoul, Republic of Korea. First, effective wavelengths were extracted from pre-processed spectral data using the successive projections algorithm to develop a classification model. A gray-level co-occurrence matrix was employed to extract textural variables, and a support vector machine was used to establish the calibration and prediction models. The results show that an optimal correct classification rate of 89.8% could be achieved by combining the effective-wavelength and texture-feature datasets for modeling. Using the developed classification model, an artificial neural network (ANN) model for predicting soil water content was constructed, with Munsell soil color, area of reflectance (near-infrared), and dry unit weight as input parameters. The accuracy of the developed ANN model's water content prediction was verified by a coefficient of determination of 0.91 and a mean absolute percentage error of 10.1%.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
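The gray-level co-occurrence matrix (GLCM) texture step described above can be sketched with scikit-image; the band image, distances, angles, and property list below are illustrative assumptions rather than the authors' exact settings.

```python
# GLCM texture-feature sketch: co-occurrence statistics from one band image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img8: np.ndarray) -> np.ndarray:
    """img8: 2-D uint8 image (e.g., one effective-wavelength band)."""
    glcm = graycomatrix(img8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

demo = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(demo))  # one texture vector per image, fed to the SVM
```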
Figures

Figure 1. Location map showing the corresponding study area sample locations.
Figure 2. Red, green, and blue (RGB) images of (a) brown, (b) yellow, and (c) red soils after No. 40 sieve analysis.
Figure 3. Hyperspectral camera system: (a) hyperspectral camera in the laboratory; (b) schematic diagram of the hyperspectral camera.
Figure 4. Soil image showing the region of interest (ROI) representing the averaged selected pixels: (a) hyperspectral image (specimen from Mt. Guryong); (b) remaining image; (c) region of interest in the granite soil.
Figure 5. Overall flowchart showing the sequence of steps in the soil classification procedure and the artificial neural network (ANN) for predicting soil water content.
Figure 6. Selection of effective wavelengths using the successive projections algorithm: (a) number of selected variables; (b) effective wavelengths in the visible and near-infrared regions.
Figure 7. Correlation coefficients of the training and testing sets: (a) log-sigmoid and linear functions; (b) log-sigmoid and log-sigmoid functions; (c) log-sigmoid and tan-sigmoid functions; (d) tan-sigmoid and linear functions; (e) tan-sigmoid and log-sigmoid functions; (f) tan-sigmoid and tan-sigmoid functions.
Figure 8. Structure of the artificial neural network (ANN) model for estimating soil water content.
Figure 9. Measured water content versus water content predicted by the ANN model.
Figure 10. ANN convergence performance for the training and testing steps.
12 pages, 2171 KiB  
Article
Development of a Low-Cost Narrow Band Multispectral Imaging System Coupled with Chemometric Analysis for Rapid Detection of Rice False Smut in Rice Seed
by Haiyong Weng, Ya Tian, Na Wu, Xiaoling Li, Biyun Yang, Yiping Huang, Dapeng Ye and Renye Wu
Sensors 2020, 20(4), 1209; https://doi.org/10.3390/s20041209 - 22 Feb 2020
Cited by 15 | Viewed by 3783
Abstract
Spectral imaging is a promising technique for assessing the quality of rice seeds, but the high cost of such systems has limited their practical application. This study aimed to develop a low-cost narrow-band multispectral imaging system for detecting rice false smut (RFS) in rice seeds. Two cultivars of rice seeds were artificially inoculated with RFS. The results demonstrate that spectral features at 460, 520, 660, 740, 850, and 940 nm were well linked to RFS. Using the least squares-support vector machine, the system achieved an overall accuracy of 98.7% with a false negative rate of 3.2% for Zheliang, and 91.4% with 6.7% for Xiushui. Moreover, the robustness of the model was validated by transferring the Zheliang model to Xiushui, yielding an overall accuracy of 90.3% and a false negative rate of 7.8%. These results demonstrate the feasibility of the developed system for RFS identification at a low detection cost.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
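LS-SVM has no scikit-learn implementation; as a hedged stand-in, the sketch below trains a standard SVM on per-seed mean reflectance at the six wavebands named above. The data are synthetic placeholders, and the classifier choice is a substitution, not the authors' LS-SVM.

```python
# Stand-in classification sketch: standard SVM on six-band seed reflectance.
import numpy as np
from sklearn.svm import SVC

bands_nm = [460, 520, 660, 740, 850, 940]   # wavebands reported above
rng = np.random.default_rng(1)
X = rng.random((120, len(bands_nm)))        # mean seed reflectance per band
y = rng.integers(0, 2, 120)                 # 0 = healthy, 1 = RFS-infected
model = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", model.score(X, y))
```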
Figures

Figure 1. Two genotypes of rice seeds, Zheliang (a) and Xiushui (b), with different degrees of rice false smut (RFS) infection.
Figure 2. Schematic overview of the analytical procedure for rice false smut (RFS) disease detection.
Figure 3. (a) CCD linearity under different exposure times and (b) illuminance distribution at 460, 520, 660, 740, 850, and 940 nm, at a working distance of 18 cm.
Figure 4. Mean reflectance spectra of healthy and RFS-infected rice seeds of Zheliang (a) and Xiushui (b); principal component analysis of reflectance at six wavelengths for healthy, slightly infected, and severely infected rice seeds of Zheliang (c) and Xiushui (d).
Figure 5. Overall accuracies and false negative rates from the least squares-support vector machine (LS-SVM) for RFS detection in Xiushui based on the model established from Zheliang.
20 pages, 3820 KiB  
Article
Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN
by Fan Feng, Shuangting Wang, Chunyang Wang and Jin Zhang
Sensors 2019, 19(23), 5276; https://doi.org/10.3390/s19235276 - 29 Nov 2019
Cited by 71 | Viewed by 5649
Abstract
Every pixel in a hyperspectral image contains detailed spectral information in hundreds of narrow bands captured by hyperspectral sensors, and pixel-wise classification of such images is the cornerstone of various hyperspectral applications. Nowadays, deep learning models represented by the convolutional neural network (CNN) provide an ideal solution for feature extraction and have made remarkable achievements in supervised hyperspectral classification. However, hyperspectral image annotation is time-consuming and laborious, and available training data are usually limited; because of this "small-sample problem", CNN-based hyperspectral classification remains challenging. Focusing on limited-sample hyperspectral classification, we designed an 11-layer CNN model called R-HybridSN (Residual-HybridSN) from the perspective of network optimization. Through an organic combination of 3D-2D CNN, residual learning, and depthwise separable convolutions, R-HybridSN can better learn deep hierarchical spatial–spectral features from very little training data. The performance of R-HybridSN is evaluated on three publicly available hyperspectral datasets with different amounts of training samples. Using only 5%, 1%, and 1% labeled data for training on Indian Pines, Salinas, and the University of Pavia, respectively, R-HybridSN achieves classification accuracies of 96.46%, 98.25%, and 96.59%, which is far better than the comparison models.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
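The 3D-then-2D hybrid idea with depthwise separable convolutions and residual learning can be sketched as a toy Keras model. This is not the authors' 11-layer R-HybridSN; the patch size, kernel sizes, channel counts, and class count are all assumptions.

```python
# Toy 3D-2D hybrid sketch: a 3D convolution over a spatial-spectral patch,
# reshaped and refined by depthwise-separable 2D convolutions with a
# residual connection (illustrative sizes, not R-HybridSN).
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input((9, 9, 30, 1))                 # (H, W, bands, 1) patch
x = layers.Conv3D(8, (3, 3, 7), padding="same", activation="relu")(inp)
x = layers.Reshape((9, 9, 30 * 8))(x)             # fold spectra into channels
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
skip = x
x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Add()([x, skip])                       # residual connection
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(16, activation="softmax")(x)   # e.g., 16 land-cover classes
model = Model(inp, out)
model.compile("adam", "categorical_crossentropy")
model.summary()
```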
Figures

Figure 1. Illustration of the proposed R-HybridSN (FC: fully connected layer).
Figure 2. The overall 2D convolution operation. The input data dimension is W × H × C1, where C1 is the channel number; the k × k convolution kernel covers the spatial dimensions in each convolution operation; the output of a single kernel is (W − k + 1) × (H − k + 1), and the final output generated by p kernels is a 3D tensor.
Figure 3. The overall 3D convolution operation. The input data dimension is W × H × B × C1, where B is the band number and C1 the channel number; the k × k × k kernel also covers the spectral dimension in each convolution operation; without padding and with stride 1, the output of a single kernel is (W − k + 1) × (H − k + 1) × (B − k + 1), and the final output generated by p kernels is a 4D tensor.
Figure 4. The overall depth-separable convolution operation: unlike traditional 2D convolution, it divides into depthwise convolution and pointwise convolution.
Figure 5. Schematic diagrams of two types of residual connections: (a) identity connection; (b) non-identity connection using a convolutional layer for dimension adjustment.
Figure 6. Classification maps of Indian Pines: (a) ground truth; (b)–(g) predicted maps for 2D-CNN, M3D-CNN, HybridSN, Model A, Model B, and R-HybridSN, respectively.
Figure 7. Classification maps of Salinas: (a) ground truth; (b)–(g) predicted maps for 2D-CNN, M3D-CNN, HybridSN, Model A, Model B, and R-HybridSN, respectively.
Figure 8. Classification maps of the University of Pavia: (a) ground truth; (b)–(g) predicted maps for 2D-CNN, M3D-CNN, HybridSN, Model A, Model B, and R-HybridSN, respectively.
Figure 9. Training accuracy curves in six consecutive experiments: (a) Model B_1.2%, OA = 93.37%; (b) Model B_1%, OA = 94.01%; (c) R-HybridSN_1.2%, OA = 97.52%; (d) R-HybridSN_1%, OA = 97.03%; (e) Model A_1.2%, OA = 95.95%; (f) Model A_1%, OA = 96.39%.
22 pages, 4427 KiB  
Article
Radiometric Assessment of a UAV-Based Push-Broom Hyperspectral Camera
by M. Alejandra P. Barreto, Kasper Johansen, Yoseline Angel and Matthew F. McCabe
Sensors 2019, 19(21), 4699; https://doi.org/10.3390/s19214699 - 29 Oct 2019
Cited by 34 | Viewed by 6160
Abstract
The use of unmanned aerial vehicles (UAVs) for Earth and environmental sensing has increased significantly in recent years. This is particularly true for multi- and hyperspectral sensing, with a variety of both push-broom and snap-shot systems becoming available; however, information on their radiometric performance and stability over time is often lacking. The authors propose a general protocol for characterizing the data retrieval and radiometric performance of push-broom hyperspectral cameras, and illustrate the workflow with the Nano-Hyperspec (Headwall Photonics, Boston, USA) sensor. The objectives of this analysis were to: (1) assess dark current and white reference consistency, both temporally and spatially; (2) evaluate spectral fidelity; and (3) determine the relationship between sensor-recorded radiance and spectroradiometer-derived reflectance. Both the laboratory-based dark current and white reference evaluations showed an insignificant increase over time (<2%) across spatial pixels and spectral bands for >99.5% of pixel–waveband combinations. Using a mercury/argon (Hg/Ar) lamp, the hyperspectral wavelength bands exhibited a slight shift of 1–3 nm against 29 Hg/Ar wavelength emission lines. The relationship between Nano-Hyperspec radiance values and spectroradiometer-derived reflectance was found to be highly linear for all spectral bands. The developed protocol showed that the Nano-Hyperspec data were both time-stable and spectrally sound.
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
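Objective (3), the per-band radiance–reflectance check, reduces to a linear regression per spectral band. Below is a short sketch with illustrative inputs; the fitting details of the paper's analysis are not reproduced here.

```python
# Per-band linearity sketch: regress reflectance on radiance and report R².
import numpy as np

def band_linearity(radiance: np.ndarray, reflectance: np.ndarray):
    """radiance, reflectance: 1-D arrays of paired samples for one band."""
    slope, intercept = np.polyfit(radiance, reflectance, 1)
    fitted = slope * radiance + intercept
    ss_res = np.sum((reflectance - fitted) ** 2)
    ss_tot = np.sum((reflectance - reflectance.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot  # coefficient of determination
```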
Figures

Figure 1. Workflow used in this study for assessing the performance of the push-broom hyperspectral sensor.
Figure 2. Spectral signature for the dark current (covered lens) experiment at 6 ms exposure: (a) mean DN spectral signature; (b) standard deviation of the DN signature; (c) mean radiance signature; (d) standard deviation of the radiance signature.
Figure 3. Temporal analysis for the dark current experiment at 6 ms exposure: (a) percentage variation, ranging from −2.5% to 5.6%, between the first and last radiance measurements; (b) time series of the percentage variation from the first measurement. The gray region encloses the maximum and minimum relative differences; the global mean is shown in black, the means for the red-green-blue (RGB) regions of the spectrum in red, green, and blue, respectively, and the near-infrared (NIR) region in yellow.
Figure 4. Spectral signature for the white reference at 12.5 ms exposure: (a) mean DN spectral signature; (b) standard deviation of the DN signature; (c) mean radiance signature; (d) standard deviation of the radiance signature; (e) Spectralon radiance response.
Figure 5. Temporal analysis for the white reference experiment at 12.5 ms exposure: (a) percentage variation between the first and last radiance measurements at 30 min; (b) time series of the percentage variation in radiance from the first measurement (shading and colors as in Figure 3).
Figure 6. Normalized intensity comparison between Hg lamp (a) and Ar lamp (b) emission lines (vertical black lines; see Table 2 for reference values) and spectra recorded with the ASD FieldSpec-4 (blue curve) and the Nano-Hyperspec sensor (orange).
Figure 7. Residuals versus fitted reflectance for (a) Spectralon, (b) Masonite, and (c) oak plywood panels, at 447 nm (blue dots), 554 nm (green), 652 nm (red), and 850 nm (yellow, NIR); the legend includes the regression R² for each wavelength.