
Article

Hyperspectral and Multispectral Image Fusion with Automated Extraction of Image-Based Endmember Bundles and Sparsity-Based Unmixing to Deal with Spectral Variability

by Salah Eddine Brezini 1,2,* and Yannick Deville 1
1 Institut de Recherche en Astrophysique et Planétologie (IRAP), Université de Toulouse, UPS-CNRS-CNES, 31400 Toulouse, France
2 Laboratoire Signaux et Images, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf, Bir El Djir, Oran 31000, Algeria
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 2341; https://doi.org/10.3390/s23042341
Submission received: 14 January 2023 / Revised: 15 February 2023 / Accepted: 17 February 2023 / Published: 20 February 2023
(This article belongs to the Special Issue Hyperspectral Sensors, Algorithms and Task Performance)
Figure 1. True-color image composite for the synthetic dataset. (a) Original hyperspectral image; (b) low-spectral-resolution multispectral image; (c) low-spatial-resolution hyperspectral image.
Figure 2. True-color image composite for the real dataset. (a) Low-spatial-resolution hyperspectral image; (b) high-spatial-resolution pansharpened multispectral image.
Figure 3. Spectral library extracted from the synthetic data by AEEB.
Figure 4. Band-wise PSNR for the synthetic dataset.
Figure 5. True-color image composite for the synthetic dataset. (a) Original hyperspectral image; (b) HSB-SV; (c) HMF-IPNMF; (d) HySure; (e) CNMF; (f) FuVar sharpened hyperspectral images.
Figure 6. Spectral band in the 0.850 µm region. (a) Original hyperspectral image; (b) HSB-SV; (c) HMF-IPNMF; (d) HySure; (e) CNMF; (f) FuVar sharpened hyperspectral images.
Figure 7. Spectral library extracted from the real data by AEEB.
Figure 8. True-color image composite for fusion products derived from the real dataset. (a) HSB-SV; (b) HMF-IPNMF; (c) HySure; (d) CNMF; (e) FuVar sharpened hyperspectral images.
Figure 9. Spectral band in the 0.854 µm region. (a) HSB-SV; (b) HMF-IPNMF; (c) HySure; (d) CNMF; (e) FuVar sharpened hyperspectral images.

Abstract

The aim of fusing hyperspectral and multispectral images is to overcome the limitation of remote sensing hyperspectral sensors by improving their spatial resolution. This process, also known as hypersharpening, generates an unobserved high-spatial-resolution hyperspectral image. To this end, several hypersharpening methods have been developed; however, most of them do not consider the spectral variability phenomenon. Neglecting this phenomenon may introduce errors that reduce the spatial and spectral quality of the sharpened products. Recently, new approaches have been proposed to tackle this problem, particularly those based on spectral unmixing and using parametric models. Nevertheless, the reported methods need a large number of parameters to address spectral variability, which inevitably yields a higher computation time compared with standard hypersharpening methods. In this paper, a new hypersharpening method is introduced that addresses spectral variability by combining a spectra-bundles-based method, namely the Automated Extraction of Endmember Bundles (AEEB), with a sparsity-based method called Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL). This new method, called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV), was tested on both synthetic and real data. Experimental results showed that HSB-SV provides sharpened products with higher spectral and spatial reconstruction fidelities at a very low computational complexity compared with other methods dealing with spectral variability, which are the main contributions of the designed method.

1. Introduction

The continuous progress of remote sensing sensors allows one to gain a better understanding of the different phenomena surrounding us [1]. In particular, remote sensing hyperspectral images acquired by high-spectral-resolution sensors consist of hundreds of contiguous spectral bands ranging from the visible to the infrared wavelength domains. Hyperspectral sensors can be carried either by spaceborne platforms, including EO-1/Hyperion [2], PRISMA [4], HISUI [5], and EnMAP [6], or by aircraft equipped with sensors such as AVIRIS [3], HySpex [7], AVIRIS-NG [8], and APEX [9]. The high spectral resolution of the data provided by these sensors delivers useful information that enables accurate classification and precise detection of pure materials (also called endmembers) in the observed scene. This fine spectral resolution permits the use of hyperspectral images (HSIs) in countless fields [10], including the monitoring of coastal areas [11,12], the measurement of gas flaring [13,14], the estimation of the area of photovoltaic panels [15], and mineral detection and mapping [16].
However, because the acquisition of HSIs is realized over narrow bandwidths, remote sensing hyperspectral sensors must operate at an optimal trade-off between a satisfactory signal-to-noise ratio (SNR) [17] and spatial resolution. In other words, these sensors must collect enough photons per band to retain an acceptable SNR [18]. This physical limitation yields HSIs with a low spatial resolution and consequently hinders their use [19] in applications requiring both high spectral and high spatial resolutions, such as classification or vegetation monitoring.
A straightforward way to circumvent this limitation is to fuse an HSI with a multispectral image (MSI) of the observed scene acquired at approximately the same time. Indeed, MSIs exhibit a high spatial resolution compared with HSIs but a low spectral resolution (they are acquired over at most around ten spectral bands). Ideally, the main objective is to produce an unobserved high-spatial-resolution hyperspectral image by using the spatial information contained in the MSI while preserving as much as possible the spectral fidelity of the HSI. This fusion process is known as hypersharpening [20] and can be seen as an extension of pansharpening, which consists of merging a panchromatic image (PAN) with an MSI or HSI. Nevertheless, pansharpening appears more complex to achieve than hypersharpening owing to the significant gap between the spectral domains covered by the PAN and the HSI [21].
Various hypersharpening approaches have been developed, including those using a Bayesian formulation [22,23,24]. Recently, a novel hypersharpening scheme was introduced and tested on WorldView-3 data [25]. Tensor representations have also been considered for the fusion process [26,27,28,29,30], and other methods are based on sparse regression [18,19]. Currently, Deep Learning (DL) techniques are extensively used for hypersharpening. Some of them are based on Convolutional Neural Networks (CNNs). In [31], the authors proposed a Spatial-Spectral Reconstruction Network (SSR-Net) trained by optimizing both spatial and spectral edge losses. In [32], a new loss function called RMSE, Angle and Laplacian (RAP) was introduced to reduce spectral-spatial distortions. Even though CNN methods have proven effective, these techniques are not always suitable for real scenarios [33]. Indeed, such networks are trained on simulated data, and the Spectral Response Function (SRF) and Point Spread Function (PSF) must be known, which is generally not the case in practical real scenarios. To overcome this problem, the authors of [34] explicitly take the spectral low rank of the HSI into consideration. Other techniques consider Generative Adversarial Networks (GANs). In [35], an improved Super-Resolution GAN (SRGAN) was applied to remote sensing images. GAN-based methods are generally subject to spectral-spatial distortions due to the mode collapse inherent to GANs [36,37]. In [37], a Latent Encoder Coupled GAN (LE-GAN) was proposed to improve the spectral-spatial fidelity of the fusion products. For more details, the reader can refer to the recent review of DL-based techniques for image fusion in [33].
Considerable emphasis has been put on methods based on Spectral Unmixing (SU) intended for hypersharpening [38,39,40,41]. Such approaches aim to extract the spectral information (the spectral signatures of the endmembers) contained in the HSI and the spatial information (the abundance coefficients) contained in the MSI. To this end, they employ techniques developed in the field of Blind Source Separation (BSS), especially those using the Nonnegative Matrix Factorization (NMF) framework [42,43,44]. Most of these SU methods are based on the Linear Mixing Model (LMM) [1], mainly due to its simplicity. In particular, this model assumes that each endmember is described by only one spectral signature over the whole image. However, this assumption is no longer valid when some physical phenomena occur in the observed scene. Thus, the LMM is limited by two main issues, namely spectral/intra-class variability and nonlinearity [45,46,47].
Currently, growing attention is dedicated to tackling spectral variability by introducing the notion of classes of endmembers instead of the concept of single endmembers. Several methods have been developed to this end, particularly ones using parametric models [47]. These models aim to integrate the spectral variability directly into the LMM, as in [48,49,50], which incorporate additive terms into the LMM, or by using scaling factors [51,52,53]. The nonlinearity can be caused by multiscattering effects or intimate interactions [54]. To overcome this limitation, the authors of [55,56,57] have proposed a Linear Quadratic NMF (LQ NMF) or a Bilinear NMF; models and algorithms employed for nonlinear unmixing are described in [54]. Neglecting spectral variability and nonlinearity, particularly the former, may spread errors during hypersharpening. The present paper aims at addressing the spectral variability issue. Several SU-based methods have been investigated to tackle spectral variability in the fusion process [28,30,58,59,60]. The cited methods provide hypersharpening products with interesting spectral and spatial fidelities. However, these techniques present a high computational complexity, particularly the methods based on parametric models, since they need a large number of parameters and variables to address spectral variability. To reduce the computational load of the above-cited approaches, we propose a new hypersharpening method using a sequential strategy. The introduced method is based on spectra bundles (which are composed of the extracted sets of spectral signatures) [61,62].
The introduced method, called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV), relies on the Automated Extraction of Image-Based Endmember Bundles, or Automated Extraction of Endmember Bundles (AEEB) [61]. The AEEB method is a simple and efficient way to handle spectral variability by building a spectral library from the pure material spectra extracted from the HSI. This enables one to construct a spectral dictionary compatible with the physics of the HSI, as the candidate endmembers are estimated directly from the considered HSI. Moreover, the use of the AEEB method significantly reduces the computational complexity. Then, to estimate the high-spatial-resolution abundance maps, a sparse regression technique is considered, namely Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL) [63]. The SUnSAL technique is applied using the down-sampled candidate endmember spectra and the MSI. Finally, the fusion product is obtained by combining the extracted high-resolution pure material spectra and the high-spatial-resolution abundance maps.
The main contributions of this paper are as follows:
  • Significantly reducing the processing time with respect to the hypersharpening methods addressing spectral variability.
  • Solving the hypersharpening problem by deriving a spectral library and applying a sparsity-based method to improve the spatial and spectral fidelities of the hypersharpening products.
  • Dealing with multiple types of spectral variability, such as illumination variations, intrinsic variability, or variability caused by other phenomena, since the physics of the considered scene is respected in the proposed approach by using spectral signatures extracted directly from the considered HSI.
The remainder of the paper is structured as follows. Section 2 is devoted to related works; in particular, recent techniques incorporating spectral variability in the fusion process by employing parametric models are described. The observation model based on the LMM and the details of the proposed hypersharpening approach are introduced in Section 3. Section 4 describes the synthetic and real data used for all the conducted experiments. In Section 5, the experimental results based on synthetic and real data are presented; the results of the designed approach are compared with those provided by some state-of-the-art methods, in particular recent methods tackling spectral variability. Finally, Section 6 concludes this paper.

2. Related Works

In this section, some recent spectral unmixing hypersharpening techniques addressing spectral variability by means of parametric models are reported. The spectral variability is often induced by several factors such as:
  • Illumination changes, mainly caused by topography variations in the observed scene affecting the angles of the incident radiation.
  • Atmospheric conditions which alter the radiance measured by the hyperspectral sensors.
  • Intrinsic spectral variability caused by physicochemical differences especially in observed scenes constituted by vegetation.
The FuVar method [58], for HS-MS image fusion with spectral variability, addresses spectral variability in the case of seasonal (inter-image) variability. More precisely, the FuVar method considers a parametric model called the Generalized LMM (GLMM) [64]. The GLMM translates the spectral variability into scaling factors that depend on both the pixel and the spectral band, which provides the GLMM with the flexibility to handle spectral variability. The GLMM is generally adopted for spectral variability caused by illumination and seasonal changes [58]. The FuVar method considers the Alternating Direction Method of Multipliers (ADMM) to solve the fusion problem and appears very effective when fusing an HSI and an MSI with spatially uniform variations. However, spectral variability cannot be described by illumination and topography changes alone, since it can be induced by several factors. Furthermore, methods based on parametric models (like FuVar) require substantial user supervision for tuning the involved parameters, which is a challenging task when it comes to addressing non-convex problems. Moreover, the FuVar method has a high computational cost, since it takes a large number of variables into consideration.
A recent approach addresses spectral variability when merging an HSI with an MSI acquired at approximately the same time. This approach is known as Hyperspectral and Multispectral data fusion based on IP-NMF (HMF-IPNMF) [59]. This method applies the Inertia-Constrained Pixel-by-Pixel NMF (IP-NMF) [65] to extract, for each class of endmembers, slightly different spectral signatures for each pixel of the HSI. The IP-NMF method proves to be very attractive when it comes to handling spectral variability arising from intrinsic variability caused by physicochemical differences [12,65]. Furthermore, unlike the Coupled NMF (CNMF) [38] or the Joint-Criterion NMF (JCNMF) [40], which use alternating or joint iterative algorithms, HMF-IPNMF considers a simple sequential strategy composed of three main stages. The first stage is the extraction of hyperspectral endmember spectra via IP-NMF [65]. The second stage consists of estimating high-spatial-resolution abundance fractions through a linear regression, using the Fully Constrained Least Squares (FCLS) method [66]. The last stage combines the results of the first and second stages to obtain the fusion product. Nevertheless, HMF-IPNMF uses specific matrix structures involving many variables to describe the HSI, which leads to a significant processing time.
A recent hypersharpening technique [67] extends the JCNMF method [40] to handle spectral variability by exploiting the same specific matrix structures as those used in IP-NMF [65]. The JCNMF method, contrary to CNMF, simultaneously unmixes the HSI and the MSI. Moreover, JCNMF exploits the spatial degradation between the MSI and the HSI. The degradation operator can be considered as a blurring-decimation matrix containing Gaussian filter values and can ideally represent the PSF. This degradation model is used to generate realistic synthetic data in the conducted experiments. Nonetheless, this joint unmixing implies that not only the HSI but also the MSI is described by the above-mentioned specific matrix structures, which inevitably increases the number of variables and leads to a high computational time.
Another method accounting for spectral variability, called FSVA [60], has also been proposed. Like FuVar, FSVA is based on a parametric model, known as the Augmented LMM (ALMM) [53], and solves the fusion problem by using an alternating strategy (ADMM). The ALMM is an extension of the Extended LMM (ELMM) [68] obtained by adding a low-rank term to the ELMM to describe more complex spectral variability. This feature permits FSVA to simultaneously handle scaling factors, intrinsic variability, and nonlinearity, which leads to performance improvements. Furthermore, FSVA combines a spatio-spectral degradation model with the spectral variability model. As with the other SU-based methods dealing with spectral variability, it requires a large number of variables, leading to a high processing time.

3. Proposed Approach

3.1. Observation Model

For a proper understanding, we summarize in this section the required principles of Linear Spectral Unmixing (LSU) [1], because the proposed method circumvents one of the main limitations of LSU, namely spectral variability. LSU assumes that any observed pixel of the HSI or MSI corresponds to a linear mixture of the endmember spectra, weighted by the associated abundance coefficients, following the observation models
$$\mathbf{X}_h = \mathbf{S}_h \mathbf{A}_h, \tag{1}$$
$$\mathbf{X}_m = \mathbf{S}_m \mathbf{A}_m, \tag{2}$$
where $\mathbf{X}_h \in \mathbb{R}_+^{L_h \times P_h}$ and $\mathbf{X}_m \in \mathbb{R}_+^{L_m \times P_m}$ are the observed hyperspectral and multispectral images, respectively; $P$ stands for the number of pixels and $L$ for the number of spectral bands, with the $h$ and $m$ indices referring to the hyperspectral and multispectral images, respectively; $\mathbf{A}_h \in \mathbb{R}_+^{N \times P_h}$ and $\mathbf{A}_m \in \mathbb{R}_+^{N \times P_m}$ are the spatially degraded abundance fractions associated with the HSI and the abundance fractions associated with the MSI, respectively; and $N$ represents the number of endmembers, assumed for the sake of clarity to be the same for both the HSI and the MSI. The estimated hyperspectral and spectrally degraded multispectral endmember spectra are denoted by $\mathbf{S}_h \in \mathbb{R}_+^{L_h \times N}$ and $\mathbf{S}_m \in \mathbb{R}_+^{L_m \times N}$.
The spatially degraded abundances $\mathbf{A}_h$ and the spectrally degraded endmember spectra $\mathbf{S}_m$ are here modelled as
$$\mathbf{A}_h = \mathbf{A}_m \mathbf{F}, \tag{3}$$
$$\mathbf{S}_m = \mathbf{R}\,\mathbf{S}_h. \tag{4}$$
The matrix $\mathbf{F} \in \mathbb{R}_+^{P_m \times P_h}$ represents the Point Spread Function (PSF) and $\mathbf{R} \in \mathbb{R}_+^{L_m \times L_h}$ the Spectral Response Function (SRF). These two functions play a significant role in making the fusion process respect the physics of the sensors. Indeed, to preserve the physical meaning of the fusion process, the sensor spectral response must be considered for each band [69]; the fusion is not physically meaningful if the SRFs of the considered sensors do not overlap [69]. Furthermore, hypersharpening aims to provide a fused product corresponding to an ideal virtual sensor that would combine the spectral sensitivity of the hyperspectral sensor and the high spatial resolution of the multispectral sensor [69].
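To make the roles of $\mathbf{F}$ and $\mathbf{R}$ concrete, the following minimal numpy sketch simulates the observation models (1)-(4) on random data; the dimensions, the box-averaging PSF and the block-averaging SRF are illustrative assumptions, not the operators used in the experiments.

```python
# Illustrative simulation of the observation models (1)-(4).
# Sizes, the box-averaging F (PSF) and the block-averaging R (SRF) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
L_h, L_m, N = 144, 4, 7          # HS bands, MS bands, endmembers
W_m, d = 64, 2                   # MS image width, spatial degradation factor
W_h = W_m // d
P_m, P_h = W_m * W_m, W_h * W_h  # pixel counts

S_h = rng.random((L_h, N))                   # hyperspectral endmember spectra
A_m = rng.dirichlet(np.ones(N), P_m).T       # high-res abundances (columns sum to 1)

# F (P_m x P_h): each low-res pixel averages a d x d block of high-res pixels.
F = np.zeros((P_m, P_h))
for i in range(W_h):
    for j in range(W_h):
        for di in range(d):
            for dj in range(d):
                F[(i * d + di) * W_m + (j * d + dj), i * W_h + j] = 1.0 / d**2

# R (L_m x L_h): each MS band averages a contiguous block of HS bands.
R = np.zeros((L_m, L_h))
edges = np.linspace(0, L_h, L_m + 1).astype(int)
for b in range(L_m):
    R[b, edges[b]:edges[b + 1]] = 1.0 / (edges[b + 1] - edges[b])

A_h = A_m @ F        # spatially degraded abundances, Equation (3)
S_m = R @ S_h        # spectrally degraded endmembers, Equation (4)
X_h = S_h @ A_h      # observed HSI, Equation (1)
X_m = S_m @ A_m      # observed MSI, Equation (2)
```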

3.2. Description of HSB-SV

As mentioned above, the proposed approach is based on the sequential strategy used in the HMF-IPNMF framework. It is divided into three main parts: (1) estimation of the hyperspectral endmember spectra by employing the AEEB method; (2) estimation of high-spatial-resolution abundance fractions by means of the SUnSAL method; and (3) the fusion stage.

3.2.1. Extraction of Spectral Library by AEEB

The first part of the introduced technique aims to estimate endmembers from the HSI while considering spectral variability. More precisely, the objective is to extract a spectral dictionary. The main motivation behind the use of a spectral library is that it can deal with different types of spectral variability. Indeed, hypersharpening algorithms based on parametric models like FuVar rely on the assumption that spectral variability can be described by scaling factors alone, which is not always relevant. The most significant example showcasing this is an HSI describing urban areas composed of various pure materials, including vegetation (green spaces), tile roofs, architectural monuments, small streets, etc. In this case [65], the spectral variability arises from different causes. Building a spectral library directly from the HSI allows one to handle spectral variability efficiently and effectively by accounting for these different factors. Constructing a spectral dictionary from an HSI is a simple task which consists of applying an Endmember Extraction Algorithm (EEA) [1] to random subsets of the HSI to obtain multiple signatures of each pure material in the observed scene. Moreover, using EEA techniques undoubtedly reduces the computational load compared with the hypersharpening methods cited in Section 2. Basically, the AEEB method is built on the assumption that the statistics of the HSI can be approximately recovered from a small fraction of it [61]. In other words, if enough pure pixels are present in the HSI, then they will also be available in a randomly selected subset of that HSI [61,70]. The validity of such an assumption relies on the size of the subsets and the number of pure pixels present in the observed scene [61].
Due to its simplicity, the AEEB method allows one to obtain an efficient representation of the spectral variability at a very low computational cost. Recent methods have proposed to directly incorporate the spectra bundles (which are composed of the set of spectral signatures extracted for each run and for each class of endmembers) into LMMs [71]. The endmember bundles can be expressed as [71]
$$\mathbf{B} = [\,\mathbf{B}_1 \mid \mathbf{B}_2 \mid \cdots \mid \mathbf{B}_J\,] \tag{5}$$
where $\mathbf{B}_j \in \mathbb{R}_+^{L_h \times Y_j}$ denotes the bundle representing the $j$-th class, $J$ is the number of classes, $Y_j$ is the number of pure spectra in the $j$-th class, and $Y = \sum_{j=1}^{J} Y_j$ is the total number of endmember spectra over all classes.
Thus, an observed hyperspectral pixel spectrum $\mathbf{x}_{h,i} \in \mathbb{R}_+^{L_h \times 1}$ is expressed as
$$\mathbf{x}_{h,i} = \mathbf{B}\,\mathbf{a}_{h,i} \tag{6}$$
where $\mathbf{a}_{h,i} \in \mathbb{R}_+^{Y \times 1}$ stands for the abundance coefficients corresponding to each individual spectrum of the endmember bundles $\mathbf{B}$.
Furthermore, to apply the AEEB method, some parameters must be fixed a priori, namely the number of pure materials present in each subset, the number of subsets, and their sizes. The number of pure materials represents the number of endmembers present in each subset. The number of subsets corresponds to the number of subsets randomly selected from the HSI and used by the AEEB method to provide the spectral library, and the size of each subset is its number of pixels. The performance of the AEEB method (and of spectral-library-based methods in general) crucially depends on the presence of a sufficient number of pure pixels in the HSI, so as to obtain a coherent description of the spectral variability present in the scene. For all the conducted experiments, we use the well-known Vertex Component Analysis (VCA) [72] method as the EEA; indeed, VCA is a fast EEA, which also helps reduce the processing time.
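As an illustration of this library-building stage, the following sketch pools EEA runs over random subsets and groups the pooled spectra into classes. This is a minimal sketch under stated assumptions: the greedy successive-projection routine below is a simple stand-in for VCA (included only to make the example self-contained), and the k-means grouping is one straightforward way, among others, to organize the pooled spectra into bundles.

```python
# Sketch of the AEEB library-building stage (illustrative, not the reference code).
import numpy as np
from sklearn.cluster import KMeans

def simple_eea(X, n):
    """Greedy successive-projection EEA, used here as a stand-in for VCA."""
    R, picked = X.astype(float).copy(), []
    for _ in range(n):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # most energetic residual pixel
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                        # project its direction out
        picked.append(j)
    return X[:, picked]

def aeeb(X_h, n_pure, n_subsets, subset_size, n_classes, seed=0):
    """Extract a spectral library B from the HSI X_h (L_h x P_h pixels)."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_subsets):
        idx = rng.choice(X_h.shape[1], size=subset_size, replace=False)
        candidates.append(simple_eea(X_h[:, idx], n_pure))  # EEA on a random subset
    B = np.hstack(candidates)                               # pooled candidate spectra
    # Group the Y pooled spectra into J classes to form the bundles B_1, ..., B_J.
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(B.T)
    order = np.argsort(labels)
    return B[:, order], labels[order]
```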
The complete algorithm of HSB-SV is described in Algorithm 1.
Algorithm 1. Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV).
Input: hyperspectral image $\mathbf{X}_h$ and multispectral image $\mathbf{X}_m$.
Output: the unobservable sharpened high-spatial-resolution hyperspectral image $\tilde{\mathbf{X}}_f$.
  • Set the number of pure materials.
  • Set the number of subsets, representing the number of applied runs of AEEB.
  • Set the size of the subsets used by the AEEB method.
  • Extract $\mathbf{B}$ from $\mathbf{X}_h$ by running AEEB over the selected number of subsets.
  • Deduce $\mathbf{B}_m$ by downsampling $\mathbf{B}$ using (4).
  • Obtain $\mathbf{a}_{m,i}$ by solving (7) using SUnSAL.
  • Recombine $\mathbf{B}$ and $\mathbf{a}_{m,i}$ using (8) to obtain $\tilde{\mathbf{X}}_f$.

3.2.2. Estimation of High-Spatial-Resolution Abundance Fractions

The second stage of the introduced method aims at extracting the high-spatial-resolution abundance fractions (stored in $\mathbf{A}_m$), in each pixel individually, from the MSI. To this end, the sparse regression-based method called SUnSAL [63] is applied. The main goal of sparse regression-based techniques is to estimate abundance coefficients from an already available, large spectral library. Indeed, only a small number of endmembers are active in a given pixel. Therefore, sparse unmixing represents each observed remote sensing spectrum as a linear combination of a few pure material spectra. In other words, sparse unmixing tries to estimate the optimal subset of pure materials in the spectral library that can best represent each mixed pixel in the observed scene [1]. Moreover, sparse unmixing methods are generally efficient in terms of computational cost [47]. This feature quite significantly decreases the processing time of HSB-SV, especially compared with hypersharpening methods addressing spectral variability. The performance of sparse unmixing methods relies on the availability of suitable spectral libraries [1]. In our case, the spectral library is extracted directly from the HSI, which consequently improves the performance of the SUnSAL method.
For each multispectral pixel $\mathbf{x}_{m,i}$, the associated high-spatial-resolution abundance fractions $\mathbf{a}_{m,i}$ (forming part of $\mathbf{A}_m$) are estimated by means of the SUnSAL method, applied between the multispectral image $\mathbf{X}_m$ and the multispectral spectra bundles forming the matrix $\mathbf{B}_m$. The matrix $\mathbf{B}_m$ is derived from the matrix $\mathbf{B}$ extracted in the first stage, in the same way as in (4), using the (known or estimated) SRF of the sensors considered in the experiments. Indeed, only a few atoms of the dictionary $\mathbf{B}_m$ are used to reconstruct a pixel spectrum. The objective of the SUnSAL method is to estimate the high-spatial-resolution abundance fractions $\mathbf{a}_{m,i}$ by optimizing the following cost function, separately for each pixel
$$\min_{\mathbf{a}_{m,i}} \; \left\| \mathbf{B}_m \mathbf{a}_{m,i} - \mathbf{x}_{m,i} \right\|_2^2 + \lambda \left\| \mathbf{a}_{m,i} \right\|_1 \tag{7}$$
where $\|\cdot\|_2^2$ is the squared $\ell_2$-norm and $\|\cdot\|_1$ is the $\ell_1$-norm, which is responsible for promoting sparsity; $\lambda$ is a non-negative parameter which tunes the relative weight between the $\ell_1$ and $\ell_2$ terms of (7).
The SUnSAL method makes use of the ADMM to optimize (7). As in the first stage of our method, the main motivation behind applying the SUnSAL method is that it delivers efficient results with a low computational complexity.
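For concreteness, the following minimal ADMM sketch solves the pixel-wise problem (7) (up to a 1/2 factor on the data term, which only rescales $\lambda$); the optional nonnegativity projection, step size and fixed iteration count are simplifying assumptions, so this should be read as an illustration rather than the reference SUnSAL implementation.

```python
# Minimal ADMM sketch of the pixel-wise sparse regression step (7).
import numpy as np

def sunsal_pixel(B_m, x, lam=1e-3, mu=0.1, n_iter=200, nonneg=True):
    """min_a 0.5*||B_m a - x||_2^2 + lam*||a||_1 (optionally a >= 0) via ADMM."""
    Y = B_m.shape[1]
    Hinv = np.linalg.inv(B_m.T @ B_m + mu * np.eye(Y))  # cached for every a-update
    Btx = B_m.T @ x
    z = np.zeros(Y)
    u = np.zeros(Y)                                     # scaled dual variable
    for _ in range(n_iter):
        a = Hinv @ (Btx + mu * (z - u))                 # quadratic subproblem
        t = a + u
        z = np.sign(t) * np.maximum(np.abs(t) - lam / mu, 0.0)  # soft thresholding
        if nonneg:
            z = np.maximum(z, 0.0)                      # project on the nonneg orthant
        u += a - z                                      # dual update
    return z
```

Applying this routine to every column of $\mathbf{X}_m$ stacks the estimated coefficients $\mathbf{a}_{m,i}$ into the matrix $\mathbf{A}_m$ used in the fusion stage.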

3.2.3. Fusion

The third and last stage of the proposed approach consists of creating the fusion product $\tilde{\mathbf{X}}_f$ by recombining the obtained matrices. Hence, each pixel spectrum $\tilde{\mathbf{x}}_{f,i}$ of the unobservable sharpened high-spatial-resolution hyperspectral image $\tilde{\mathbf{X}}_f$ is defined as
$$\tilde{\mathbf{x}}_{f,i} = \mathbf{B}\,\mathbf{a}_{m,i}. \tag{8}$$
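In matrix form, this stage reduces to a single product, as the short sketch below shows; the column-wise layout of $\mathbf{A}_m$ and the square-image reshape are assumptions made for the example only.

```python
# Fusion stage, Equation (8), applied to all pixels at once.
import numpy as np

X_f = B @ A_m                           # (L_h x P_m) sharpened hyperspectral image
H = W = int(np.sqrt(A_m.shape[1]))      # assuming a square image with P_m = H * W
cube = X_f.T.reshape(H, W, B.shape[0])  # image cube for display or inspection
```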

4. Datasets

For the conducted experiments, two sets of data were considered, namely synthetic and real data. We chose data of small size because HMF-IPNMF and FuVar require a substantial memory capacity and computation cost, as these two methods use large matrices, as indicated by the authors of the corresponding methods [58,59].

4.1. Synthetic Data

The synthetic datasets were obtained from a real airborne hyperspectral image with high spatial and spectral resolutions [73]. This real hyperspectral image covers the 0.35–1.05 µm spectral domain with 144 wavelengths. More precisely, we used a 100 × 100 pixel subset (Figure 1) of this hyperspectral image [59]. This subset contained seven classes of endmembers with spectral variability. The selected subset was used to create two synthetic images by means of Wald's protocol [74]: it was spatially and spectrally degraded to obtain the low-spatial-resolution hyperspectral image and the low-spectral-resolution multispectral image, respectively. The low-spatial-resolution hyperspectral image was generated by spatially degrading the real hyperspectral image by a factor of 2, using a blurring-decimation matrix with a Gaussian filter as in [40] (this can represent the PSF in (3)). The low-spectral-resolution multispectral image was created by spectrally degrading the original image using the ENVI software and the SRF of the QuickBird sensor (Table 1).
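The sketch below illustrates these two Wald-protocol degradations on a reference cube of shape (H, W, L_h); the Gaussian standard deviation and the SRF matrix `R` are placeholder assumptions standing in for the actual filter of [40] and the QuickBird SRF of Table 1, which are not reproduced here.

```python
# Wald-protocol style degradations (sketch with placeholder parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_degrade(X, factor=2, sigma=1.0):
    """Gaussian blur within each band, then decimation by `factor`."""
    blurred = gaussian_filter(X, sigma=(sigma, sigma, 0))  # no blur across bands
    return blurred[::factor, ::factor, :]

def spectral_degrade(X, R):
    """Apply an SRF matrix R (L_m x L_h) to every pixel spectrum."""
    H, W, L_h = X.shape
    return (X.reshape(-1, L_h) @ R.T).reshape(H, W, R.shape[0])
```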

4.2. Real Data

We also considered real data for the conducted experiments, specifically a real hyperspectral image and a real multispectral image. These two images were acquired on the same day (3 March 2003) and at the same time [40,59]. These real data were geometrically coregistered and radiometrically corrected and cover a small part of the urban area of Oran (Algeria). The images were mainly composed of seven classes of endmembers. The low-spatial-resolution hyperspectral image was acquired by the Earth Observing-1 (EO-1) [2] Hyperion sensor, with 125 spectral bands and 30 × 30 pixels (Figure 2a), and has a spatial resolution of 30 m. The high-spatial-resolution pansharpened multispectral image was acquired by the EO-1 Advanced Land Imager (ALI), with 9 spectral bands (Table 2) and 90 × 90 pixels (Figure 2b). The multispectral image has a 10 m spatial resolution, which represents a scale factor of three between the hyperspectral and multispectral images.

5. Experiments

5.1. Performance Criteria

To evaluate the performance of the proposed method and the tested state-of-the-art methods, various metrics were employed. For the synthetic dataset, the sharpened hyperspectral products $\tilde{\mathbf{X}}_f$ obtained with the introduced method and the tested state-of-the-art techniques were compared with the reference image $\mathbf{X}$ using spectral and spatial performance criteria. The first quality measure for the synthetic data is the Spectral Angle Mapper (SAM). The SAM at the $i$-th pixel is obtained as follows [77]
$$\mathrm{SAM}_i = \arccos\!\left( \frac{\left\langle \mathbf{x}_i,\, \tilde{\mathbf{x}}_{f,i} \right\rangle}{\left\| \mathbf{x}_i \right\|_2 \, \left\| \tilde{\mathbf{x}}_{f,i} \right\|_2} \right) \tag{9}$$
with $i = 1, \ldots, P$. The average value of the SAM over all pixels was used to determine the quality of the fusion product. The lower the value of the SAM, the better the method.
The second performance criterion was the spectral Normalized Mean Square Error, $\mathrm{NMSE}_\lambda$ [59],
$$\mathrm{NMSE}_{\lambda,i} = \frac{\left\| \mathbf{x}_i - \tilde{\mathbf{x}}_{f,i} \right\|_2^2}{\left\| \mathbf{x}_i \right\|_2^2}. \tag{10}$$
As for the SAM, the average value of $\mathrm{NMSE}_\lambda$ over all pixels was used to determine the spectral quality of the fusion product. The ideal value of $\mathrm{NMSE}_\lambda$ is 0.
The third performance criterion was the spatial Normalized Mean Square Error, $\mathrm{NMSE}_s$ [59],
$$\mathrm{NMSE}_{s,k} = \frac{\left\| \mathbf{X}^k - \tilde{\mathbf{X}}_f^k \right\|_2^2}{\left\| \mathbf{X}^k \right\|_2^2} \tag{11}$$
where $\mathbf{X}^k$ and $\tilde{\mathbf{X}}_f^k$ are the $k$-th spectral bands of the reference hyperspectral image and of the estimated sharpened product, respectively. The average value of $\mathrm{NMSE}_s$ over all spectral bands was used to determine the spatial quality of the fusion product. The ideal value of $\mathrm{NMSE}_s$ is 0.
We also used the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) defined in [77],
$$\mathrm{ERGAS} = 100\, r\, \sqrt{ \frac{1}{L_h} \sum_{l=1}^{L_h} \left( \frac{\mathrm{RMSE}_l}{\mu(\tilde{\mathbf{X}}_f^l)} \right)^2 } \tag{12}$$
where $r$ is the spatial resolution factor between the MSI and the HSI, $\mu(\tilde{\mathbf{X}}_f^l)$ is the mean of the $l$-th band of the estimated image, and $\mathrm{RMSE}_l$ represents the corresponding Root Mean Square Error. The ideal value of the ERGAS is 0.
The Peak Signal-to-Noise Ratio (PSNR) was also used [77],
$$\mathrm{PSNR}_l = 20 \log_{10}\!\left( \frac{\max(\tilde{\mathbf{X}}_f^l)}{\mathrm{RMSE}_l} \right) \tag{13}$$
where $\mathrm{RMSE}_l$ represents the Root Mean Square Error of the $l$-th band. A higher value of the PSNR means a better spatial reconstruction.
The last quality metric used for the synthetic data was the Universal Image Quality Index (UIQI) [77],
$$\mathrm{UIQI}(\mathbf{X}, \tilde{\mathbf{X}}_f) = \frac{4\, \delta_{\mathbf{X} \tilde{\mathbf{X}}_f}\, \mu(\mathbf{X})\, \mu(\tilde{\mathbf{X}}_f)}{\left( \sigma_{\mathbf{X}}^2 + \sigma_{\tilde{\mathbf{X}}_f}^2 \right)\left( \mu(\mathbf{X})^2 + \mu(\tilde{\mathbf{X}}_f)^2 \right)} \tag{14}$$
where $\delta_{\mathbf{X} \tilde{\mathbf{X}}_f}$ is the covariance between the reference image $\mathbf{X}$ and the estimated image $\tilde{\mathbf{X}}_f$, $\sigma_{\mathbf{X}}^2$ and $\sigma_{\tilde{\mathbf{X}}_f}^2$ are their variances, and $\mu(\mathbf{X})$ and $\mu(\tilde{\mathbf{X}}_f)$ denote their means. The UIQI varies between −1 and 1; its ideal value is 1, which indicates a perfect reconstruction.
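For reference, minimal numpy versions of some of these criteria are sketched below; `X` and `X_f` denote the reference and fused images flattened to (L_h x P) matrices, a layout assumed only for the example.

```python
# Minimal numpy versions of SAM (9), NMSE_lambda (10) and band-wise PSNR (13).
import numpy as np

def sam_degrees(X, X_f):
    """Mean spectral angle over all pixels, in degrees."""
    num = np.sum(X * X_f, axis=0)
    den = np.linalg.norm(X, axis=0) * np.linalg.norm(X_f, axis=0)
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def nmse_lambda(X, X_f):
    """Mean spectral normalized MSE over all pixels."""
    err = np.linalg.norm(X - X_f, axis=0) ** 2
    return np.mean(err / np.linalg.norm(X, axis=0) ** 2)

def psnr_band(X, X_f, l):
    """PSNR of the l-th spectral band, in dB."""
    rmse = np.sqrt(np.mean((X[l] - X_f[l]) ** 2))
    return 20.0 * np.log10(X_f[l].max() / rmse)
```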
For the real data, other types of metrics were used, as there is no ground truth for such data. The Modified Quality with no Reference criterion (mQNR) [40] was considered. The mQNR is based on the Quality with no Reference (QNR) index [78], modified to fit the hypersharpening process. The mQNR is given by
$$\mathrm{mQNR} = (1 - D_\lambda)^\sigma (1 - D_s)^\rho \tag{15}$$
where $\sigma$ and $\rho$ are real-valued exponents, both set to 1 for the tests conducted on real data, and $D_s$ and $D_\lambda$ represent the spatial and spectral distortion indices, respectively. The spectral distortion index $D_\lambda$ reads [40]
$$D_\lambda = \sqrt[\omega]{ \frac{1}{L_h (L_h - 1)} \sum_{j=1}^{L_h} \sum_{\substack{r=1 \\ r \neq j}}^{L_h} \left| \mathrm{UIQI}\big(\tilde{\mathbf{X}}_f^j, \tilde{\mathbf{X}}_f^r\big) - \mathrm{UIQI}\big(\mathbf{X}_h^j, \mathbf{X}_h^r\big) \right|^{\omega} } \tag{16}$$
where $\omega$ is a positive exponent set to 1 for the experiments, $\tilde{\mathbf{X}}_f^j$ is a spectral band of the fusion product, and $\mathbf{X}_h^j$ is a spectral band of the reference hyperspectral image.
The spatial distortion index $D_s$ was obtained as follows: a subindex was first calculated, as described in [78], between each multispectral band and the hyperspectral bands covered by that multispectral band, without considering the hyperspectral bands outside the spectral range of the multispectral image. The final spatial distortion index $D_s$ is the mean of the estimated subindices.
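A compact sketch of these no-reference indices follows; the `uiqi` helper implements (14) on flattened bands, the unit exponents match those used in the experiments, and $D_s$ is omitted since it reuses the same UIQI machinery band pair by band pair.

```python
# Sketch of UIQI (14), D_lambda (16) and mQNR (15), with omega = 1.
import numpy as np
from itertools import combinations

def uiqi(a, b):
    """Universal Image Quality Index between two flattened bands."""
    ma, mb = a.mean(), b.mean()
    cov = np.mean((a - ma) * (b - mb))
    return 4 * cov * ma * mb / ((a.var() + b.var()) * (ma**2 + mb**2))

def d_lambda(X_f, X_h):
    """Spectral distortion: inter-band UIQI discrepancy, Equation (16)."""
    L = X_f.shape[0]
    diffs = [abs(uiqi(X_f[j], X_f[r]) - uiqi(X_h[j], X_h[r]))
             for j, r in combinations(range(L), 2)]
    return 2.0 * sum(diffs) / (L * (L - 1))  # each unordered pair appears twice in (16)

def mqnr(d_lam, d_s, sigma=1.0, rho=1.0):
    """Modified Quality with no Reference criterion, Equation (15)."""
    return (1 - d_lam) ** sigma * (1 - d_s) ** rho
```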

5.2. Results and Discussion

5.2.1. Results for Synthetic Dataset

The first part of the tests was devoted to the synthetic dataset. The regularization parameters considered for the synthetic dataset for HMF-IPNMF [59,65], HySure [41,77] and FuVar [58] are reported in Table 3.
For the HMF-IPNMF method, the maximum number of iterations of IP-NMF was set to 100. For the CNMF method, the numbers of iterations for the inner and outer loops were fixed to 100 and 3, respectively. For the FuVar and HySure methods, the blurring kernel was assumed to be known a priori. As an initialization step for all the state-of-the-art methods, the VCA method was applied, and the FCLS method was used to initialize the abundance coefficients for the FuVar, CNMF and HMF-IPNMF methods. As the CNMF and HySure methods do not consider spectral variability, they were executed with the number of pure materials fixed to 30, as suggested in [77]; this gives them more flexibility and lets them somehow manage the spectral variability although they are not designed for it. The HMF-IPNMF and FuVar methods, which do deal with spectral variability, were applied with the manually determined number of classes of endmembers, equal to 7.
For the HSB-SV method, all the fixed parameters are reported in Table 4. The numbers of pure materials and subsets were fixed to perform the spectral reconstruction while preserving the processing time as much as possible.
Finally, the CPU used in the conducted experiments was an Intel Core i5-8350U processor running at 1.70 GHz, with a memory capacity of 16 GB. The results of the quality metrics for the synthetic dataset are reported in Table 5. Figure 3 illustrates the spectral library extracted from the HSI by means of the AEEB method; it clearly shows the presence of spectral variability and confirms that this phenomenon must be taken into account.
Table 5 clearly shows that the methods considering spectral variability yielded the best results, in particular HMF-IPNMF and HSB-SV. The FuVar method reached the highest value of NMSEλ, 14.42%, which corresponds to the worst spectral reconstruction in terms of spectral fidelity among the tested methods. This finding demonstrates that spectral variability is not induced only by illumination or topography changes in the observed scene. The CNMF and HySure methods improved this aspect by obtaining NMSEλ values of 11.89% and 9.62%, respectively. The HMF-IPNMF method provided the best spectral reconstruction among the tested state-of-the-art methods, with a SAM of 3.53° and an NMSEλ equal to 7.92%. The HSB-SV method delivered the best overall results in terms of spectral performance criteria, with the lowest values of the SAM (2.65°) and the NMSEλ (7.49%), which clearly demonstrates its superior spectral fidelity compared with the tested state-of-the-art methods. These findings show that considering a spectral library to estimate the hyperspectral spectra is very effective.
Furthermore, these findings were confirmed by the PSNR, whose highest value (equal to 43.01 dB) corresponds to a substantial gain of nearly 3 dB compared with HMF-IPNMF and 10 dB with respect to the FuVar method. To illustrate this notable gain of HSB-SV, Figure 4 shows the band-wise PSNR of all methods for the synthetic dataset. This figure clearly illustrates that the HSB-SV method obtains the best PSNR values in almost all the spectral bands, particularly between band 20 and band 80.
The spatial quality metrics confirmed the findings obtained from the spectral metrics: the HSB-SV method outperforms the other methods in terms of spatial fidelity, with the highest value of the UIQI and the lowest value of the ERGAS. These results clearly show the attractiveness of using sparse regression to estimate high-spatial-resolution abundance fractions. Indeed, the AEEB method provides a quite suitable spectral library, which improves the performance of the SUnSAL method. These results demonstrate that using a spectral library improves the spectral reconstruction on the one hand and enhances the spatial fidelity on the other, by providing SUnSAL with a suitable dictionary.
The processing times of the tested techniques are reported in Table 6. As expected, since the HMF-IPNMF and FuVar methods consider large matrices to tackle spectral variability, their processing times were the highest, with running times equal to 465.98 s and 363.64 s, respectively. The HSB-SV method achieved the best results with the lowest execution time among the tested state-of-the-art methods. It should be noted that the high-quality spectral and spatial reconstructions obtained by the HSB-SV method do not come at the price of a higher processing time, as reported in Table 6. Indeed, although the HSB-SV method deals with spectral variability like the HMF-IPNMF and FuVar methods, it provided the best results with a computation time equal to 0.74 s. Moreover, the HSB-SV method had a lower computing time than the CNMF and HySure methods (with computational times around 3.20 s and 12.89 s, respectively), even though these methods do not consider spectral variability and are supposed to be more efficient in terms of computational cost. These findings prove the effectiveness of the HSB-SV method in dealing with spectral variability during the hypersharpening process, by providing products with high-fidelity reconstruction at a very low processing cost. This significant reduction of the processing time has two main causes: the first one is that the AEEB method is very efficient, in particular because a fast algorithm (VCA) is considered for the extraction of endmembers; the second one is the efficiency of the SUnSAL method.
For the visual inspection of the obtained results, Figure 5 illustrates the true-color composite of the synthetic dataset for all tested methods. We can clearly observe that the HSB-SV and HMF-IPNMF methods achieved the best spatial reconstruction, particularly compared with CNMF and FuVar, which exhibit many spatial and spectral distortions in the building area (red buildings).
To obtain a clearer view of the spatial gain achieved by the HSB-SV method (as it is difficult to draw a clear conclusion from the color composite images), Figure 6 shows the hypersharpening products of all the methods for the spectral band in the 0.850 µm region. This figure clearly shows that the HSB-SV method exhibits the lowest spatial distortions among the tested methods and is the closest to the reference hyperspectral image.

5.2.2. Results for Real Dataset

The second part of the tests is devoted to the real data. The regularization parameters of the state-of-the-art methods were identical to those used in the tests performed with the synthetic dataset (Table 3). For the HSB-SV method, the used parameters are reported in Table 7. The spectral library extracted from the HSI by means of the AEEB method is illustrated in Figure 7; it clearly shows the presence of spectral variability in the real dataset, so the notion of classes of endmembers must be considered.
Table 8 reports the performance metrics for the real dataset. The CNMF method obtained the highest values of $D_\lambda$ and $D_s$, showing that it provided the lowest spectral and spatial fidelities of this benchmark. The FuVar method improved this aspect, with a better spatial and spectral reconstruction, as it considers spectral variability. The HMF-IPNMF method achieved the best overall results among the tested state-of-the-art methods, in particular an mQNR above 0.95, which proves that it handles the spectral variability better than the FuVar method (whose mQNR is equal to 0.9288). The HSB-SV method outperformed all the other tested methods in terms of spectral and spatial fidelities. In particular, it achieved the best spatial reconstruction, with $D_s$ equal to 0.0064. This spatial enhancement was mainly due to the SUnSAL method, applied with a suitable spectral library. Furthermore, the HSB-SV method obtained the best value of the mQNR, equal to 0.9615. This finding proves that using a spectral library significantly improves the spectral reconstruction performance with respect to the tested state-of-the-art methods.
The execution times of the methods applied to the real data are provided in Table 9. The HSB-SV method was significantly faster than the tested state-of-the-art methods, particularly the approaches tackling spectral variability. As the HMF-IPNMF and FuVar methods handle spectral variability with modified LMMs involving large matrices, they came with the highest processing times, around 461.91 s (HSB-SV was about 2310 times faster than HMF-IPNMF) and 238.99 s (HSB-SV was around 1195 times faster than FuVar), respectively. The CNMF and HySure methods improved this aspect, with running times around 1.63 s and 12.02 s, respectively, because they do not consider spectral variability. However, the CNMF and HySure methods remain slower than the HSB-SV method, in particular HySure (HSB-SV is around 60 times faster than HySure). The HSB-SV method presented the lowest execution time, below one second, i.e., around 0.20 s. Furthermore, the HSB-SV method brings obvious improvements in terms of spectral and spatial fidelities compared with the methods dealing with spectral variability: it provided a high fidelity with the lowest computational time. These significant improvements with respect to the tested state-of-the-art methods prove the attractiveness of using sparse regression to achieve the best spatial reconstruction at the lowest computational cost.
To illustrate the performance of the tested techniques, Figure 8 shows the true-color composite of the obtained hypersharpening products. It can be clearly seen that the CNMF method presented more spatial distortions in the urban area, in particular for the roads (roundabout region). The HMF-IPNMF and HySure methods improved this aspect, with less spatial distortion and a better spatial fidelity for the urban area (roundabout region). The HSB-SV method achieved the best spatial reconstruction; in particular, the roads were reconstructed with more spatial fidelity.
For a better visual interpretation of the results, Figure 9 shows the hypersharpening products of all the methods for the spectral band in the 0.854 µm region. This figure clearly demonstrates that the HySure and HSB-SV methods obtained the hypersharpening products with the best spatial reconstruction; in particular, the HSB-SV method provided a product with less spatial distortion. Finally, the HSB-SV method proves the effectiveness of using a spectral library with sparse regression to achieve the fusion of hyperspectral and multispectral images with both high spatial and spectral fidelities. Indeed, the HSB-SV method yields a hypersharpening product at a very low computational cost compared with methods tackling spectral variability like HMF-IPNMF and FuVar.

6. Conclusions

In this paper, a new hypersharpening method called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV) is introduced. This technique relies on spectra bundles, more precisely on the Automated Extraction of Endmember Bundles (AEEB) method. The AEEB method tackles spectral variability by constructing a spectral library directly from the hyperspectral image, by means of an Endmember Extraction Algorithm (EEA) applied to random subsets of the HSI. This straightforward and efficient approach allows one to obtain a spectral dictionary. Furthermore, it substantially reduces the number of manipulated variables compared with the hypersharpening methods from the literature that treat spectral variability. This significantly reduces the execution time compared with the HMF-IPNMF and FuVar methods, which constitutes the main originality of this work. Indeed, the use of the Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL) method proves to be very attractive to estimate high-spatial-resolution abundance coefficients while keeping the processing time very low.
The proposed technique was tested on synthetic and real datasets along with some recent state-of-the-art methods. The results, based on spatial and spectral performance criteria, show that the introduced strategy is very attractive and efficient in terms of spectral and spatial reconstructions: the new method outperforms the approaches tested in this paper. In addition, the HSB-SV method has the lowest processing time among the considered state-of-the-art methods, specifically compared with techniques dealing with spectral variability. These findings prove that using a spectral library is very effective for the hypersharpening process. Indeed, the AEEB method allows one to construct a spectral library which accounts for the different types of spectral variability present in the observed hyperspectral scenes, which improves the spectral performance. Moreover, the HSB-SV method achieves a sufficient reconstruction while providing the lowest execution time of the benchmark, mainly thanks to the efficiency of the SUnSAL method, whose spatial reconstruction quality proves the attractiveness of sparse regression. These findings clearly show that the HSB-SV method can handle a complex phenomenon like spectral variability and still provide satisfactory results while keeping the computational complexity low.
An interesting extension of this work may consist of developing techniques considering other sparse regression methods to improve the obtained spatial reconstruction. Moreover, future work will focus on improving the efficiency of the AEEB method by reducing the number of pure materials needed to achieve a sufficient spectral reconstruction.

Author Contributions

Conceptualization, S.E.B. and Y.D.; methodology, S.E.B. and Y.D.; software, S.E.B.; validation, S.E.B. and Y.D.; formal analysis, S.E.B. and Y.D.; investigation, S.E.B. and Y.D.; resources, S.E.B. and Y.D.; data curation, S.E.B.; writing—original draft preparation, S.E.B. and Y.D.; writing—review and editing, S.E.B. and Y.D.; visualization, S.E.B. and Y.D.; supervision, Y.D.; project administration, Y.D.; funding acquisition, S.E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Centre des Techniques Spatiales, which is part of the Algerian space agency, in particular Moussa Sofiane Karoui for providing the real dataset used for the conducted experiments. The authors would like to also thank the editorial board of Sensors for their invitation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
  2. Middleton, E.M.; Ungar, S.G.; Mandl, D.J.; Ong, L.; Frye, S.W.; Campbell, P.E.; Landis, D.R.; Young, J.P.; Pollack, N.H. The Earth Observing One (EO-1) satellite mission: Over a decade in space. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 243–256.
  3. Pancorbo, J.L.; Quemada, M.; Roberts, D.A. Drought impact on cropland use monitored with AVIRIS imagery in Central Valley, California. Sci. Total Environ. 2023, 859, 160198.
  4. Loizzo, R.; Guarini, R.; Longo, F.; Scopa, T.; Formaro, R.; Facchinetti, C.; Varacalli, G. Prisma: The Italian Hyperspectral Mission. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 175–178.
  5. Iwasaki, A.; Ohgi, N.; Tanii, J.; Kawashima, T.; Inada, H. Hyperspectral Imager Suite (HISUI)-Japanese hyper-multi spectral radiometer. In Proceedings of the IGARSS 2011—2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011.
  6. Stuffler, T.; Kaufmann, C.; Hofer, S.; Förster, K.; Schreier, G.; Mueller, A.; Eckardt, A.; Bach, H.; Penné, B.; Benz, U.; et al. The EnMAP hyperspectral imager—An advanced optical payload for future applications in Earth observation programmes. Acta Astronaut. 2007, 61, 115–120.
  7. Blaaberg, S.; Løke, T.; Baarstad, I.; Fridman, A.; Koirala, P.A. Next generation VNIR-SWIR hyperspectral camera system: HySpex ODIN-1024. In Proceedings of the SPIE, Amsterdam, The Netherlands, 22–25 September 2014.
  8. Thompson, D.R.; Boardman, J.W.; Eastwood, M.L.; Green, R.O.; Haag, J.M.; Mouroulis, P.; Gorp, B.V. Imaging spectrometer stray spectral response: In-flight characterization, correction, and validation. Remote Sens. Environ. 2018, 204, 850–886.
  9. Schaepman, M.E.; Jehle, M.; Hueni, A.; D’Odorico, P.; Damm, A.; Weyermann, J.; Schneider, F.D.; Laurent, V.; Popp, C.; Seidel, F.C.; et al. Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX). Remote Sens. Environ. 2015, 158, 207–219.
  10. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
  11. Guillaume, M.; Minghelli, A.; Deville, Y.; Chami, M.; Juste, L.; Lenot, X.; Lafrance, B.; Jay, S.; Briottet, X.; Serfaty, V. Mapping Benthic Habitats by Extending Non-Negative Matrix Factorization to Address the Water Column and Seabed Adjacency Effects. Remote Sens. 2020, 12, 2072.
  12. Deville, Y.; Brezini, S.E.; Benhalouche, F.Z.; Karoui, M.S.; Karoui, M.; Lenot, X.; Lafrance, B.; Chami, M.; Jay, S.; Minghelli, A.; et al. Hyperspectral Oceanic Remote Sensing with Adjacency Effects: From Spectral-Variability-Based Modeling to Performance of Associated Blind Unmixing Methods. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019.
  13. Benhalouche, F.Z.; Benharrats, F.; Bouhlala, M.A.; Karoui, M.S. Spectral Unmixing Based Approach for Measuring Gas Flaring from VIIRS NTL Remote Sensing Data: Case of the Flare FIT-M8-101A-1U, Algeria. Remote Sens. 2022, 14, 2305.
  14. Benhalouche, F.Z.; Karoui, M.S.; Benharrats, F.; Bouhlala, M.A. Improving Classical Approach for Flare Parameters Estimation from VIIRS NtL Remote Sensing Data by Linear and Nonlinear Spectral Unmixing Methods. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022.
  15. Karoui, M.S.; Benhalouche, F.Z.; Deville, Y.; Djerriri, K.; Briottet, X.; Houet, T.; Le Bris, A.; Weber, C. Partial Linear NMF-Based Unmixing Methods for Detection and Area Estimation of Photovoltaic Panels in Urban Hyperspectral Remote Sensing Data. Remote Sens. 2019, 11, 2164.
  16. Benhalouche, F.Z.; Benabbou, O.; Karoui, M.S.; Kebir, L.W.; Bennia, A.; Deville, Y. Minerals Detection and Mapping in the Southwestern Algeria Gara-Djebilet Region with a Multistage Informed NMF-Based Unmixing Approach Using Prisma Remote Sensing Hyperspectral Data. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022.
  17. Zhuang, L.; Fu, X.; Ng, M.K.; Bioucas-Dias, J.M. Hyperspectral Image Denoising Based on Global and Nonlocal Low-Rank Factorizations. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10438–10454.
  18. Akhtar, N.; Shafait, F.; Mian, A. Sparse spatio-spectral representation for hyperspectral image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014.
  19. Fu, X.; Jia, S.; Xu, M.; Zhou, J.; Li, Q. Sparsity Constrained Fusion of Hyperspectral and Multispectral Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  20. Selva, M.; Aiazzi, B.; Butera, F.; Chiarantini, L.; Baronti, S. Hyper-Sharpening: A First Approach on SIM-GA Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3008–3024.
  21. Constans, Y.; Fabre, S.; Seymour, M.; Crombez, V.; Deville, Y.; Briottet, X. Hyperspectral Pansharpening in the Reflective Domain with a Second Panchromatic Channel in the SWIR II Spectral Domain. Remote Sens. 2022, 14, 113.
  22. Hardie, R.C.; Eismann, M.T.; Wilson, G.L. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor. IEEE Trans. Image Process. 2004, 13, 1174–1184.
  23. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Bayesian fusion of hyperspectral and multispectral images. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014.
  24. Wei, Q.; Bioucas-Dias, J.M.; Dobigeon, N.; Tourneret, J.Y. Hyperspectral and Multispectral Image Fusion Based on a Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668.
  25. Selva, M.; Santurri, L.; Baronti, S. Improving Hypersharpening for WorldView-3 Data. IEEE Geosci. Remote Sens. Lett. 2019, 16, 987–991.
  26. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Ma, W.K. Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach. IEEE Trans. Signal Process. 2018, 66, 6503–6517.
  27. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing Hyperspectral and Multispectral Images via Coupled Sparse Tensor Factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
  28. Borsoi, R.A.; Prévost, C.; Usevich, K.; Brie, D.; Bermudez, J.C.M.; Richard, C. Coupled Tensor Decomposition for Hyperspectral and Multispectral Image Fusion With Inter-Image Variability. IEEE J. Sel. Top. Signal Process. 2021, 15, 702–717.
  29. Prévost, C.; Usevich, K.; Comon, P.; Brie, D. Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-Based Algorithms. IEEE Trans. Signal Process. 2020, 68, 931–946.
  30. Prévost, C.; Borsoi, R.A.; Usevich, K.; Brie, D.; Bermudez, J.C.; Richard, C. Hyperspectral super-resolution accounting for spectral variability: Coupled tensor LL1-based recovery and blind unmixing of the unknown super-resolution image. SIAM J. Imaging Sci. 2022, 15, 110–138.
  31. Zhang, X.; Huang, W.; Wang, Q.; Li, X. SSR-NET: Spatial–spectral reconstruction network for hyperspectral and multispectral image fusion. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5953–5965.
  32. Xu, S.; Amira, O.; Liu, J.; Zhang, C.X.; Zhang, J.; Li, G. HAM-MFN: Hyperspectral and multispectral image multiscale fusion network with RAP loss. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4618–4628.
  33. Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image fusion meets deep learning: A survey and perspective. Inf. Fusion 2021, 76, 323–336.
  34. Xie, Q.; Zhou, M.; Zhao, Q.; Xu, Z.; Meng, D. MHF-Net: An interpretable deep network for multispectral and hyperspectral image fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1457–1473.
  35. Xiong, Y.; Guo, S.; Chen, J.; Deng, X.; Sun, L.; Zheng, X.; Xu, W. Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors. Remote Sens. 2020, 12, 1263.
  36. Li, W.; Fan, L.; Wang, Z.; Ma, C.; Cui, X. Tackling mode collapse in multi-generator GANs with orthogonal vectors. Pattern Recognit. 2021, 110, 107646. [Google Scholar] [CrossRef]
37. Shi, Y.; Han, L.; Han, L.; Chang, S.; Hu, T.; Dancey, D. A latent encoder coupled generative adversarial network (LE-GAN) for efficient hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19.
38. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2011, 50, 528–537.
39. Karoui, M.S.; Deville, Y.; Kreri, S. Joint nonnegative matrix factorization for hyperspectral and multispectral remote sensing data fusion. In Proceedings of the 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013.
40. Karoui, M.S.; Deville, Y.; Benhalouche, F.Z.; Boukerch, I. Hypersharpening by joint-criterion nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2016, 55, 1660–1670.
41. Simoes, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2014, 53, 3373–3388.
42. Comon, P.; Jutten, C. Handbook of Blind Source Separation: Independent Component Analysis and Applications; Academic Press: Oxford, UK, 2010.
43. Deville, Y. Blind Source Separation and Blind Mixture Identification Methods. In Wiley Encyclopedia of Electrical and Electronics Engineering; Webster, J., Ed.; Wiley: Hoboken, NJ, USA, 2016; pp. 1–33.
44. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation; John Wiley and Sons: Hoboken, NJ, USA, 2009.
45. Somers, B.; Asner, G.P.; Tits, L.; Coppin, P. Endmember variability in spectral mixture analysis: A review. Remote Sens. Environ. 2011, 115, 1603–1616.
46. Zare, A.; Ho, K.C. Endmember variability in hyperspectral analysis: Addressing spectral variability during spectral unmixing. IEEE Signal Process. Mag. 2013, 31, 95–104.
47. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M.; Richard, C.; Chanussot, J.; Drumetz, L.; Tourneret, J.-Y.; Zare, A.; Jutten, C. Spectral variability in hyperspectral data unmixing: A comprehensive review. IEEE Geosci. Remote Sens. Mag. 2021, 9, 223–270.
48. Brezini, S.E.; Karoui, M.S.; Benhalouche, F.Z.; Deville, Y.; Ouamri, A. A pixel-by-pixel NMF-based method for hyperspectral unmixing using a new linear mixing model to address additively-tuned spectral variability. In Proceedings of Image and Signal Processing for Remote Sensing, SPIE, Strasbourg, France, 9–12 September 2019.
49. Brezini, S.E.; Karoui, M.S.; Benhalouche, F.Z.; Deville, Y.; Ouamri, A. An NMF-Based Method For Hyperspectral Unmixing Using A Structured Additively-Tuned Linear Mixing Model To Address Spectral Variability. In Proceedings of the 2020 Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Tunis, Tunisia, 9–11 March 2020.
50. Brezini, S.E.; Deville, Y.; Karoui, M.S.; Benhalouche, F.Z.; Ouamri, A. A Penalization-Based NMF Approach for Hyperspectral Unmixing Addressing Spectral Variability with an Additively-Tuned Mixing Model. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021.
51. Karoui, M.S.; Benhalouche, F.Z.; Deville, Y. A Gradient-Based Method for the Modified Augmented Linear Mixing Model Addressing Spectral Variability for Hyperspectral Unmixing. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022.
52. Karoui, M.S.; Benhalouche, F.Z.; Deville, Y. Hyperspectral Unmixing with a Modified Augmented Linear Mixing Model Addressing Spectral Variability. In Proceedings of the 2022 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Istanbul, Turkey, 7–9 March 2022.
53. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Trans. Image Process. 2018, 28, 1923–1938.
54. Dobigeon, N.; Tourneret, J.Y.; Richard, C.; Bermudez, J.C.M.; McLaughlin, S.; Hero, A.O. Nonlinear unmixing of hyperspectral images: Models and algorithms. IEEE Signal Process. Mag. 2013, 31, 82–94.
55. Meganem, I.; Deville, Y.; Hosseini, S.; Deliot, P.; Briottet, X. Linear-quadratic blind source separation using NMF to unmix urban hyperspectral images. IEEE Trans. Signal Process. 2014, 62, 1822–1833.
56. Meganem, I.; Déliot, P.; Briottet, X.; Deville, Y.; Hosseini, S. Linear–quadratic mixing model for reflectances in urban environments. IEEE Trans. Geosci. Remote Sens. 2013, 52, 544–558.
57. Benhalouche, F.Z.; Deville, Y.; Karoui, M.S.; Ouamri, A. Hyperspectral Unmixing Based on Constrained Bilinear or Linear-Quadratic Matrix Factorization. Remote Sens. 2021, 13, 2132.
58. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M. Super-resolution for hyperspectral and multispectral image fusion accounting for seasonal spectral variability. IEEE Trans. Image Process. 2019, 29, 116–127.
59. Brezini, S.E.; Karoui, M.S.; Benhalouche, F.Z.; Deville, Y.; Ouamri, A. Hypersharpening by an NMF-Unmixing-Based Method Addressing Spectral Variability. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
60. Camacho, A.; Vargas, E.; Arguello, H. Hyperspectral and multispectral image fusion addressing spectral variability by an augmented linear mixing model. Int. J. Remote Sens. 2022, 43, 1577–1608.
61. Somers, B.; Zortea, M.; Plaza, A.; Asner, G.P. Automated extraction of image-based endmember bundles for improved spectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 396–408.
62. Xu, M.; Zhang, L.; Du, B. An image-based endmember bundle extraction algorithm using both spatial and spectral information. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2607–2617.
63. Bioucas-Dias, J.M.; Figueiredo, M.A. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010.
64. Imbiriba, T.; Borsoi, R.A.; Bermudez, J.C.M. Generalized linear mixing model accounting for endmember variability. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018.
65. Revel, C.; Deville, Y.; Achard, V.; Briottet, X.; Weber, C. Inertia-constrained pixel-by-pixel nonnegative matrix factorisation: A hyperspectral unmixing method dealing with intra-class variability. Remote Sens. 2018, 10, 1706.
66. Heinz, D.; Chang, C. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545.
67. Karoui, M.S.; Benhalouche, F.Z.; Brezini, S.E.; Deville, Y.; Benkouider, Y.K. Hypersharpening by a Multiplicative Joint-Criterion NMF Method Addressing Spectral Variability. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021.
68. Drumetz, L.; Veganzones, M.A.; Henrot, S.; Phlypo, R.; Chanussot, J.; Jutten, C. Blind hyperspectral unmixing using an extended linear mixing model to address spectral variability. IEEE Trans. Image Process. 2016, 25, 3890–3905.
69. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
70. Drumetz, L.; Meyer, T.R.; Chanussot, J.; Bertozzi, A.L.; Jutten, C. Hyperspectral image unmixing with endmember bundles and group sparsity inducing mixed norms. IEEE Trans. Image Process. 2019, 28, 3435–3450.
71. Uezato, T.; Fauvel, M.; Dobigeon, N. Hyperspectral unmixing with spectral variability using adaptive bundles and double sparsity. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3980–3992.
72. Nascimento, J.M.; Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
73. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pizurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418.
74. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
75. Correa, Y.T.S.; Bovolo, F.; Bruzzone, L. Change detection in very high resolution multisensor images. In Proceedings of Image and Signal Processing for Remote Sensing XX, SPIE, Amsterdam, The Netherlands, 22–25 September 2014.
76. EO-1 (Earth Observing-1). Available online: https://earth.esa.int/web/eoportal/satellite-missions/e/eo-1 (accessed on 12 January 2023).
77. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56.
78. Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
Figure 1. True-color image composite for the synthetic dataset. (a) Original hyperspectral image; (b) low-spectral-resolution multispectral image; (c) low-spatial-resolution hyperspectral image.
Figure 2. True-color image composite for the real dataset. (a) Low-spatial-resolution hyperspectral image; (b) high-spatial-resolution pansharpened multispectral image.
Figure 3. Spectral library extracted from the synthetic data by AEEB.
Figure 4. Band-wise PSNR for the synthetic dataset.
Figure 5. True-color image composite for the synthetic dataset. (a) Original hyperspectral image; (b–f) sharpened hyperspectral images obtained by HSB-SV, HMF-IPNMF, HySure, CNMF, and FuVar, respectively.
Figure 6. Spectral band in the 0.850 μm region. (a) Original hyperspectral image; (b–f) sharpened hyperspectral images obtained by HSB-SV, HMF-IPNMF, HySure, CNMF, and FuVar, respectively.
Figure 7. Spectral library extracted from the real data by AEEB.
Figure 8. True-color image composite for fusion products derived from the real dataset. (a–e) Sharpened hyperspectral images obtained by HSB-SV, HMF-IPNMF, HySure, CNMF, and FuVar, respectively.
Figure 9. Spectral band in the 0.854 μm region. (a–e) Sharpened hyperspectral images obtained by HSB-SV, HMF-IPNMF, HySure, CNMF, and FuVar, respectively.
Table 1. Spectral bands of the QuickBird sensor [75].

QuickBird spectral bands (μm):
0.45–0.52
0.52–0.60
0.63–0.69
0.76–0.90
Table 2. Spectral bands of the EO-1 Advanced Land Imager sensor [76].

EO-1 Advanced Land Imager spectral bands (μm):
0.433–0.453
0.450–0.515
0.525–0.605
0.630–0.690
0.775–0.805
0.845–0.890
1.200–1.300
1.550–1.750
2.080–2.350
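As an aside, band definitions such as those of Tables 1 and 2 are what make it possible to simulate a low-spectral-resolution multispectral image (as in Figure 1b) from a hyperspectral cube: each multispectral band is obtained by aggregating the hyperspectral bands that fall inside the corresponding interval. The sketch below illustrates this under the simplifying assumption of a flat (uniform) spectral response per interval, which is an idealization of the true sensor response; the array shapes and the `simulate_multispectral` name are illustrative, not taken from the paper.

```python
import numpy as np

# QuickBird-like band intervals in micrometers (from Table 1).
QUICKBIRD_BANDS = [(0.45, 0.52), (0.52, 0.60), (0.63, 0.69), (0.76, 0.90)]

def simulate_multispectral(hs_cube, wavelengths, intervals=QUICKBIRD_BANDS):
    """Average hyperspectral bands inside each interval (uniform response).

    hs_cube     : (rows, cols, n_hs_bands) reflectance cube
    wavelengths : (n_hs_bands,) band-center wavelengths in micrometers
    intervals   : list of (lower, upper) multispectral band edges
    """
    ms_bands = []
    for lo, hi in intervals:
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        if not mask.any():
            raise ValueError(f"no hyperspectral band falls in [{lo}, {hi}] um")
        ms_bands.append(hs_cube[:, :, mask].mean(axis=2))
    return np.stack(ms_bands, axis=2)  # (rows, cols, n_ms_bands)
```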
Table 3. Regularization parameters for the HMF-IPNMF, HySure, and FuVar methods.

HMF-IPNMF: μ = 30
HySure: λ_m = 1 and λ_φ = 10^−3
FuVar: λ_m = 1, λ_A = 10^−4, λ_1 = 0.01, and λ_2 = 10,000
Table 4. Considered parameters for HSB-SV.

Experiment settings for HSB-SV:
Number of classes of pure materials: 7
Number of subsets: 5
Size of subsets: 10%
Sparsity-promoting parameter λ (SUnSAL): 5 × 10^−4
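As context for the λ entry above: SUnSAL [63] solves a sparsity-regularized, nonnegativity-constrained least-squares problem over the extracted library. The snippet below is a minimal sketch of that problem solved with plain proximal-gradient (ISTA) iterations rather than SUnSAL's actual ADMM scheme, and it omits the optional sum-to-one constraint; the library matrix `E`, pixel spectrum `y`, and iteration count are placeholders, not the authors' implementation.

```python
import numpy as np

def sparse_unmix(E, y, lam=5e-4, n_iter=500):
    """Minimize 0.5 * ||y - E a||^2 + lam * ||a||_1  subject to  a >= 0.

    E : (n_bands, n_atoms) endmember-bundle library
    y : (n_bands,) observed pixel spectrum
    """
    # Step size 1/L, where L = ||E||_2^2 is the Lipschitz constant
    # of the gradient of the data-fit term.
    t = 1.0 / (np.linalg.norm(E, 2) ** 2)
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        z = a - t * (E.T @ (E @ a - y))   # gradient step on the data term
        a = np.maximum(z - t * lam, 0.0)  # nonnegative soft-thresholding:
                                          # prox of lam*||.||_1 + {a >= 0}
    return a
```

Stacking the per-pixel solutions column-wise then gives the bundle-level abundance matrix used by the fusion step.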
Table 5. Performance criteria for the synthetic dataset.

              HSB-SV    HMF-IPNMF   FuVar    HySure   CNMF
SAM (°)        2.65       3.53       3.75     3.63     4.34
NMSE_λ (%)     7.49       7.92      14.42     8.79    11.89
NMSE_s (%)     6.76       8.52      15.96     9.62    13.73
PSNR (dB)     43.01      40.61      34.12    38.73    35.50
UIQI           0.9728     0.9627     0.9098   0.9652   0.9402
ERGAS          4.96       5.77      10.26     6.18     8.93
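For readers reproducing Table 5 and Figure 4, the sketch below shows commonly used definitions of SAM, band-wise PSNR, and ERGAS, assuming the reference and fused products are numpy arrays of shape (rows, cols, bands). Exact conventions (the peak value in PSNR, the resolution-ratio factor in ERGAS) vary across the literature, so this is indicative rather than the evaluation code used here; UIQI is omitted since it requires a sliding-window computation.

```python
import numpy as np

def sam_deg(X, Y, eps=1e-12):
    """Mean spectral angle (degrees) between reference X and fused Y."""
    Xf = X.reshape(-1, X.shape[-1])
    Yf = Y.reshape(-1, Y.shape[-1])
    num = np.sum(Xf * Yf, axis=1)
    den = np.linalg.norm(Xf, axis=1) * np.linalg.norm(Yf, axis=1) + eps
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()

def psnr_per_band(X, Y):
    """Band-wise PSNR (dB); peak taken per reference band."""
    mse = ((X - Y) ** 2).mean(axis=(0, 1))
    peak = X.max(axis=(0, 1))
    return 10.0 * np.log10(peak ** 2 / mse)

def ergas(X, Y, ratio):
    """ERGAS; `ratio` is the high- to low-resolution pixel-size ratio
    (e.g., 1/4 for a downsampling factor of 4; conventions differ)."""
    rmse = np.sqrt(((X - Y) ** 2).mean(axis=(0, 1)))
    mean = X.mean(axis=(0, 1))
    return 100.0 * ratio * np.sqrt(np.mean((rmse / mean) ** 2))
```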
Table 6. Processing times of the tested methods (in seconds) for the synthetic dataset.

            HSB-SV   HMF-IPNMF   FuVar    HySure   CNMF
Time (s)     0.74     465.98     363.64    12.89    3.20
Table 7. Considered parameters for HSB-SV for the real dataset.

Experiment settings for HSB-SV:
Number of pure materials: 7
Number of subsets: 5
Size of subsets: 10%
Sparsity-promoting parameter λ (SUnSAL): 2 × 10^−4
Table 8. Performance criteria for the real dataset.

          HSB-SV    HMF-IPNMF   FuVar    HySure   CNMF
D_λ        0.0322    0.0335      0.0485   0.0442   0.1243
D_s        0.0064    0.0119      0.0238   0.0098   0.0863
mQNR       0.9615    0.9549      0.9288   0.9464   0.8000
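The indices in Table 8 belong to the QNR family of no-reference measures [78]: D_λ quantifies spectral distortion, D_s quantifies spatial distortion, and they combine as QNR = (1 − D_λ)^α (1 − D_s)^β. The table values are consistent with α = β = 1; for instance, (1 − 0.0322)(1 − 0.0064) ≈ 0.9615 for HSB-SV. The one-liner below shows only this combination step (computing D_λ and D_s themselves requires inter-band UIQI comparisons and is omitted):

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """QNR-style combined index from spectral (D_lambda) and spatial (D_s)
    distortions; a value of 1.0 means no distortion."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

# Consistency check against Table 8 (HSB-SV column, alpha = beta = 1):
assert abs(qnr(0.0322, 0.0064) - 0.9615) < 5e-4
```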
Table 9. Processing time for each method for the real dataset (in seconds).

            HSB-SV   HMF-IPNMF   FuVar    HySure   CNMF
Time (s)     0.20     461.91     238.99    12.02    1.63