A ROBUST ALTERNATING VOLUME MAXIMIZATION ALGORITHM FOR ENDMEMBER
EXTRACTION IN HYPERSPECTRAL IMAGES
ArulMurugan Ambikapathi†, Tsung-Han Chan† , Wing-Kin Ma†∗ and Chong-Yung Chi†
†Inst. Commun. Eng., National Tsing Hua Univ., Hsinchu, Taiwan
E-mail: (tsunghan@mx,cychi@ee).nthu.edu.tw
ABSTRACT
Accurate estimation of endmember signatures and the associated
abundances of a scene from its hyperspectral observations is, at present, a challenging research area. Many existing hyperspectral unmixing algorithms are based on Winter's belief, which
states that the vertices of the maximum volume simplex inside
the data cloud (observations) will yield high fidelity estimates of
the endmember signatures if pure-pixels exist. Based on Winter’s
belief, we recently proposed a convex analysis based alternating volume maximization (AVMAX) algorithm. In this paper, we develop
a robust version of the AVMAX algorithm, called RAVMAX. Here, the presence of
noise in the hyperspectral observations is taken into consideration
with the original deterministic constraints suitably reformulated as
probabilistic constraints. The subproblems involved are convex
problems and they can be effectively solved using available convex
optimization solvers. Monte Carlo simulations are presented to
demonstrate the efficacy of the proposed RAVMAX algorithm over
several existing pure-pixel based hyperspectral unmixing methods,
including its predecessor, the AVMAX algorithm.
Index Terms— Hyperspectral unmixing, Convex analysis,
Chance constraints, Second-order cone program
1. INTRODUCTION
A hyperspectral sensor records the electromagnetic scattering patterns of distinct materials over hundreds of spectral bands that range
from visible to near-infrared wavelength region. The limited spatial
resolution of the sensor used for hyperspectral imaging demands an
effective hyperspectral unmixing (HU) scheme to extract the underlying endmember signatures (or simply endmembers) and the associated abundance maps distributed over a scene of interest [1]. Conventional HU algorithms built on a linear mixing model (to be explained later) assume that, in a given set of hyperspectral observations, there exists a pure pixel for each endmember, namely a pixel that is fully contributed by a single endmember.
HU algorithms based on the pure-pixel assumption include pixel purity index (PPI) [2], N-finder (N-FINDR) [3] and vertex component analysis (VCA) [4]. A recent addition to this group is the AVMAX [5] algorithm, a convex analysis based optimization method motivated by Winter's belief, which employs an alternating linear programming approach to solve the underlying optimization problem. Nevertheless, the performance of the above-mentioned algorithms degrades when the observations are corrupted by noise.
This work was supported by the National Science Council (R.O.C.) under Grant NSC 96-2628-E-007-003-MY3, and by General Research Funds
of Hong Kong Research Grant Council (Project Nos. CUHK415509,
CUHK415908).
978-1-4244-8907-7/10/$26.00 ©2010 IEEE
∗Dept. Electronic Eng., Chinese Univ. Hong Kong, Shatin, N.T., Hong Kong
E-mail: wkma@ieee.org
In this paper, we extend our AVMAX [5] algorithm to a robust AVMAX (RAVMAX) algorithm that accounts for the noise effects by employing chance constraints. We first reformulate the AVMAX subproblems into equivalent problems to which the chance
constraints can be suitably applied, and then recast them as
second-order cone programs. Hence, the chance constrained problem can be efficiently solved by any convex optimization solver in
an alternating fashion. We finally show some Monte-Carlo simulations to demonstrate the efficacy of the proposed RAVMAX algorithm, in comparison with the conventional pure-pixel based algorithms, including its predecessor.
In the remainder of the paper, the following notations are employed. RM and RM ×N represent the sets of all real M × 1 vectors
and M × N matrices, respectively. The symbol ⪰ denotes componentwise inequality. 1N, IM and 0 respectively represent the N × 1
all one vector, the M × M identity matrix and an all-zero vector of
proper dimension. A Gaussian distribution with mean vector µ and
covariance matrix Σ is denoted as N (µ, Σ). The notation sign(b)
denotes a vector whose elements are the signs of the elements in the
vector b, |b| denotes a column vector whose elements are the absolute values of the individual elements in b, and diag(b) denotes
a diagonal matrix with the elements of b as its diagonal elements.
The symbols [a]i and Aij denote the ith element of the vector a and the (i, j)th element of the matrix A, respectively. Finally, the symbol [A]i,:
corresponds to the ith row vector of A.
2. SIGNAL MODEL AND ASSUMPTIONS
Consider a scenario in which a hyperspectral sensor measures solar
electromagnetic radiation from N distinct substances. Each pixel of
the hyperspectral images measured over M spectral bands can be
represented by the following M × N linear mixing model [1, 3–6]
$$\mathbf{y}[n] = \mathbf{x}[n] + \mathbf{w}[n], \tag{1}$$

$$\mathbf{x}[n] = \mathbf{A}\mathbf{s}[n] = \sum_{i=1}^{N} s_i[n]\,\mathbf{a}_i, \quad \forall\, n = 1,\ldots,L. \tag{2}$$
In (1), y[n] = [ y1 [n], . . . , yM [n] ]T represents the nth observed pixel vector comprising M spectral bands and x[n] =
[ x1 [n], . . . , xM [n] ]T corresponds to its noise-free counterpart.
The noise vector w[n] = [ w1[n], . . . , wM[n] ]T in (1) is a zero-mean uniform white Gaussian noise vector (i.e., w[n] ∼ N (0, D), where
D = σ 2 IM and σ denotes the standard deviation of the noise). In
(2), A = [ a1 , . . . , aN ] ∈ RM ×N denotes the endmember signature matrix with the ith column vector ai being the ith endmember
signature, s[n] = [ s1 [n], . . . , sN [n] ]T ∈ RN is the nth abundance
vector comprising N fractional abundances and L is the total number of observed pixels. Assuming prior knowledge of the number of
endmembers N , we aim to estimate the endmember signature matrix
A and the abundances s[1], . . . , s[L] from the given noisy pixels
y[1], . . . , y[L], under the following general assumptions [5, 6]:
(A1) (Non-negativity condition) si [n] ≥ 0 ∀ i, n.
(A2) (Full additivity condition) $\sum_{i=1}^{N} s_i[n] = 1$ ∀ n.
(A3) min{L, M } ≥ N and A is of full column rank.
(A4) There exists an index set {ℓ1 , ℓ2 , . . . , ℓN }, such that x[ℓi ] =
ai for i = 1, . . . , N (i.e., the pure-pixel assumption).
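For concreteness, the signal model (1)-(2) under (A1)-(A2) can be simulated in a few lines of numpy. The sizes M, N, L and the noise level σ below are illustrative choices, not the paper's settings, and the Dirichlet draw is just one convenient way to generate abundances obeying (A1)-(A2):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L = 50, 4, 200        # bands, endmembers, pixels (illustrative sizes)
sigma = 0.01                # noise standard deviation, so D = sigma^2 * I_M

A = rng.uniform(size=(M, N))                 # endmember signature matrix
S = rng.dirichlet(np.ones(N), size=L).T      # abundances satisfy (A1)-(A2) by construction
X = A @ S                                    # noise-free pixels, model (2)
Y = X + sigma * rng.standard_normal((M, L))  # noisy observations, model (1)

print(S.min() >= 0.0)                   # → True, (A1): non-negativity
print(np.allclose(S.sum(axis=0), 1.0))  # → True, (A2): full additivity
```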
For ease of later use, the convex hull [7] of the vectors a1 , . . . , aN ∈
RM is defined as
$$\mathrm{conv}\{\mathbf{a}_1,\ldots,\mathbf{a}_N\} = \bigg\{\, \mathbf{x} = \sum_{i=1}^{N}\theta_i\,\mathbf{a}_i \ \bigg|\ \mathbf{1}_N^T\boldsymbol{\theta} = 1,\ \boldsymbol{\theta} \succeq \mathbf{0} \,\bigg\}, \tag{3}$$

where θ = [θ1, . . . , θN]T. A convex hull conv{a1, . . . , aN} is called an (N − 1)-dimensional simplex in RM if {a1, . . . , aN} ⊂ RM is affinely independent.
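The volume of such a simplex, which drives the maximum-volume criterion reviewed in Section 3 (formula (9)), can be evaluated numerically. The sketch below, with an illustrative set of vertices, computes |det Δ|/(N − 1)! and checks it against the known area of the unit triangle for N = 3:

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of the (N-1)-dimensional simplex conv{v_1, ..., v_N}, with the
    v_i in R^{N-1} given as the columns of `vertices`. Uses the determinant
    formula of Section 3: |det(Delta)| / (N-1)!, where Delta stacks the
    vertices over a row of ones."""
    N = vertices.shape[1]
    Delta = np.vstack([vertices, np.ones((1, N))])
    return abs(np.linalg.det(Delta)) / factorial(N - 1)

# N = 3: the unit triangle in R^2 has area 1/2.
tri = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
print(simplex_volume(tri))  # → 0.5 (up to floating point)
```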
3. REVIEW OF AVMAX ALGORITHM

As in many other HU algorithms, we begin with dimension reduction of the observations. In this work, we employ the affine set fitting procedure of [5], starting from the noise-free signal model (2). The procedure is summarized as follows:

Lemma 1. (Dimension reduction by affine set fitting [5, 6]) Under (A2) and (A3), a dimension-reduced pixel vector x̃[n] can be obtained by an affine transformation of x[n]:

$$\tilde{\mathbf{x}}[n] = \mathbf{C}^T(\mathbf{x}[n] - \mathbf{d}) \in \mathbb{R}^{N-1}, \tag{4}$$

where (C, d) is the affine set fitting solution given by

$$\mathbf{d} = \frac{1}{L}\sum_{n=1}^{L} \mathbf{x}[n], \tag{5}$$

$$\mathbf{C} = [\, \mathbf{q}_1(\mathbf{U}\mathbf{U}^T),\ \mathbf{q}_2(\mathbf{U}\mathbf{U}^T),\ \ldots,\ \mathbf{q}_{N-1}(\mathbf{U}\mathbf{U}^T) \,], \tag{6}$$

where U = [ x[1] − d, . . . , x[L] − d ] ∈ RM×L, and qi(R) denotes the orthonormal eigenvector associated with the ith principal eigenvalue of R.

The affine set fitting solution (Ĉ, d̂) for noisy observations is obtained by replacing x[n] in (5) and (6) with y[n]. In the noisy case, (Ĉ, d̂) serves as the best least-squares approximation to the true (C, d), and the former asymptotically approaches the latter for large L. By (2), (4) and under (A1)-(A3), it has been proved in [8] that

$$\tilde{\mathbf{x}}[n] \in \mathrm{conv}\{\boldsymbol{\gamma}_1, \ldots, \boldsymbol{\gamma}_N\}, \tag{7}$$

where γj = CT(aj − d), ∀ j = 1, . . . , N, are the dimension-reduced endmembers.

The main problem now is to estimate γ1, . . . , γN from x̃[1], . . . , x̃[L]. Winter [3] proposed the belief that, under (A4), the vertices of the maximum-volume simplex inside the data cloud (observations) yield high-fidelity estimates of the endmember signatures. Based on this belief, the unmixing problem [5] can be written as

$$\max_{\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N \in \mathbb{R}^{N-1}} V(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N) \quad \text{s.t. } \boldsymbol{\nu}_i \in \mathrm{conv}\{\tilde{\mathbf{x}}[1],\ldots,\tilde{\mathbf{x}}[L]\},\ \forall\, i, \tag{8}$$

where V(ν1, . . . , νN) is the volume of the (N − 1)-dimensional simplex conv{ν1, . . . , νN} in RN−1, given by [9]

$$V(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N) = \frac{|\det(\boldsymbol{\Delta}(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N))|}{(N-1)!}, \tag{9}$$

where

$$\boldsymbol{\Delta}(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N) = \begin{bmatrix} \boldsymbol{\nu}_1 & \cdots & \boldsymbol{\nu}_N \\ 1 & \cdots & 1 \end{bmatrix}.$$

By letting X̃ = [ x̃[1], . . . , x̃[L] ] ∈ R(N−1)×L and by (3), problem (8) can be expressed as

$$\max_{\substack{\boldsymbol{\nu}_i \in \mathbb{R}^{N-1} \\ \boldsymbol{\theta}_1,\ldots,\boldsymbol{\theta}_N \in \mathbb{R}^{L}}} |\det(\boldsymbol{\Delta}(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N))| \quad \text{s.t. } \boldsymbol{\nu}_i = \tilde{\mathbf{X}}\boldsymbol{\theta}_i,\ \boldsymbol{\theta}_i \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_i = 1\ \forall\, i. \tag{10}$$

Though the constraints of (10) are convex, the non-convexity of the objective function makes the problem difficult to solve. The problem can be handled in a convenient manner by the idea of cofactor expansion and alternating optimization. The cofactor expansion of the objective function of (10) along the jth column is given by

$$\det(\boldsymbol{\Delta}(\boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_N)) = \mathbf{b}_j^T\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}), \tag{11}$$

where bj = [(−1)i+j det(Vij)]i=1N−1 ∈ RN−1, and Vij ∈ R(N−1)×(N−1) is the submatrix of Δ(ν1, . . . , νN) with the ith row and jth column removed. We then consider the partial maximization of (10) with respect to νj and θj, while fixing νi and θi for all i ≠ j. Problem (10) then becomes

$$\max_{\boldsymbol{\nu}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ \mathbf{b}_j^T\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \boldsymbol{\nu}_j = \tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1. \tag{12}$$

The partial maximization problem (12) can be decomposed into the following two linear programs:

$$p^\star = \max_{\boldsymbol{\nu}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ \mathbf{b}_j^T\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \boldsymbol{\nu}_j = \tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1, \tag{13}$$

$$q^\star = \min_{\boldsymbol{\nu}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ \mathbf{b}_j^T\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \boldsymbol{\nu}_j = \tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1. \tag{14}$$

The optimal solution of (12) is that of (13) if |p⋆| > |q⋆|, and that of (14) if |q⋆| > |p⋆|. This alternating optimization is performed over all the N columns (one iteration), and the relative change in the volume of the updated Δ(ν1, . . . , νN) is compared with a given threshold: if the change exceeds the threshold, we continue with the next updating iteration; otherwise, we conclude that the current updated νj's are optimal. Once the optimal solution of (10), denoted by ν1⋆, . . . , νN⋆, is obtained, the endmember estimates are recovered as âi = Cνi⋆ + d (by virtue of (4)) for all i. Next, we aim to make AVMAX robust against noise effects.
4. ROBUST AVMAX FORMULATION AND ALGORITHM

In this section, we first reformulate (13) and (14) so that chance constraints can be incorporated into the unmixing problem, and then develop the robust version of AVMAX.

4.1. Restructuring the AVMAX algorithm

Let B = diag(sign(bj)) and G = −B. Then GG = BB = IN−1, bjTB = |bj|T and bjTG = −|bj|T. The subproblems (13) and (14) can then be equivalently written as

$$p^\star = \max_{\boldsymbol{\nu}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ \mathbf{b}_j^T\mathbf{B}\mathbf{B}\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \mathbf{B}\boldsymbol{\nu}_j = \mathbf{B}\tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1, \tag{15}$$

$$q^\star = \min_{\boldsymbol{\nu}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ \mathbf{b}_j^T\mathbf{G}\mathbf{G}\boldsymbol{\nu}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \mathbf{G}\boldsymbol{\nu}_j = \mathbf{G}\tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1. \tag{16}$$

Then, by a change of variables, we let

$$\boldsymbol{\alpha}_j = \mathbf{B}\boldsymbol{\nu}_j, \tag{17}$$

$$\boldsymbol{\beta}_j = \mathbf{G}\boldsymbol{\nu}_j. \tag{18}$$

To facilitate the application of chance constraints to the AVMAX problem, we relax the first equality constraints of (15) and (16), so that the corresponding subproblems become

$$p^\star = \max_{\boldsymbol{\alpha}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ |\mathbf{b}_j|^T\boldsymbol{\alpha}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \boldsymbol{\alpha}_j \preceq \mathbf{B}\tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1, \tag{19}$$

$$q^\star = \min_{\boldsymbol{\beta}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ -|\mathbf{b}_j|^T\boldsymbol{\beta}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \boldsymbol{\beta}_j \preceq \mathbf{G}\tilde{\mathbf{X}}\boldsymbol{\theta}_j,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1. \tag{20}$$

Although we relax the first constraints of the subproblems, the optimal solutions of (19) and (20) are still equivalent to those of (13) and (14), respectively, as stated in the following lemma:

Lemma 2. (Equivalence of subproblems) The subproblems (19) and (20) are equivalent to (13) and (14), respectively.

The proof of Lemma 2 is given in the Appendix.

4.2. Robust AVMAX Algorithm

Now we consider the unmixing problem with noisy observations given by (1). Substituting (1) into (4), we get the dimension-reduced observations

$$\tilde{\mathbf{y}}[n] \triangleq \hat{\mathbf{C}}^T(\mathbf{y}[n] - \hat{\mathbf{d}}) \cong \tilde{\mathbf{x}}[n] + \hat{\mathbf{C}}^T\mathbf{w}[n], \tag{21}$$

where ĈTw[n] is a random vector following N(0, ĈTDĈ). In matrix form (considering all the pixels, n = 1, . . . , L), we can write the above equation as Ỹ = X̃ + ĈTW, where Ỹ = [ỹ[1], . . . , ỹ[L]], X̃ = [x̃[1], . . . , x̃[L]], and W = [w[1], . . . , w[L]]. The first inequality constraint in (19) now becomes

$$\boldsymbol{\alpha}_j \preceq \mathbf{B}(\tilde{\mathbf{Y}} - \hat{\mathbf{C}}^T\mathbf{W})\boldsymbol{\theta}_j = \mathbf{B}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j + \mathbf{z}_j, \tag{22}$$

where zj ≜ −BĈTWθj ∼ N(0, ‖θj‖₂² BĈTDĈBT).

Since the noise-induced vector zj is random and unknown, our approach is to replace (22) with the chance constraints

$$\Pr\{[\boldsymbol{\alpha}_j]_i - [\mathbf{B}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j \le [\mathbf{z}_j]_i\} \ge \eta, \quad \forall\, i = 1,\ldots,N-1, \tag{23}$$

where 0 < η < 1 is a design parameter. A similar expression applies to the first inequality constraint of (20), i.e.,

$$\Pr\{[\boldsymbol{\beta}_j]_i - [\mathbf{G}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j \le [\mathbf{z}_j]_i\} \ge \eta, \quad \forall\, i = 1,\ldots,N-1. \tag{24}$$

The second-order cone equivalence of a chance constraint has been discussed in [7]. Specifically, for a random variable ε ∼ N(µ, δ²) and t ∈ R, one can show that Pr(ε ≤ t) ≥ η holds if and only if t ≥ δΦ⁻¹(η) + µ, where Φ⁻¹ is the inverse of the cumulative distribution function of the standard normal random variable.

By letting Q = BĈTDĈBT ∈ R(N−1)×(N−1) and applying the above chance-constraint conversion to (23) and (24), we have

$$\sqrt{Q_{ii}}\, \|\boldsymbol{\theta}_j\|_2\, \Phi^{-1}(1-\eta) \ge [\boldsymbol{\alpha}_j]_i - [\mathbf{B}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j, \tag{25}$$

$$\sqrt{Q_{ii}}\, \|\boldsymbol{\theta}_j\|_2\, \Phi^{-1}(1-\eta) \ge [\boldsymbol{\beta}_j]_i - [\mathbf{G}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j, \tag{26}$$

for all i = 1, . . . , N − 1. By replacing the first constraints of (19) and (20) with (25) and (26), respectively, the robust AVMAX problem can be written as

$$p^\star = \max_{\boldsymbol{\alpha}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ |\mathbf{b}_j|^T\boldsymbol{\alpha}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \sqrt{Q_{ii}}\,\|\boldsymbol{\theta}_j\|_2\,\Phi^{-1}(1-\eta) \ge [\boldsymbol{\alpha}_j]_i - [\mathbf{B}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j\ \forall\, i,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1, \tag{27}$$

$$q^\star = \min_{\boldsymbol{\beta}_j \in \mathbb{R}^{N-1},\, \boldsymbol{\theta}_j \in \mathbb{R}^{L}} \ -|\mathbf{b}_j|^T\boldsymbol{\beta}_j + (-1)^{N+j}\det(\mathbf{V}_{Nj}) \quad \text{s.t. } \sqrt{Q_{ii}}\,\|\boldsymbol{\theta}_j\|_2\,\Phi^{-1}(1-\eta) \ge [\boldsymbol{\beta}_j]_i - [\mathbf{G}]_{i,:}\tilde{\mathbf{Y}}\boldsymbol{\theta}_j\ \forall\, i,\ \boldsymbol{\theta}_j \succeq \mathbf{0},\ \mathbf{1}_L^T\boldsymbol{\theta}_j = 1. \tag{28}$$

The value of η affects the feasible sets of (27) and (28), specifically their convexity. There are three possible cases. When η > 0.5 (i.e., Φ⁻¹(1 − η) < 0), the first constraints of both subproblems are second-order cone constraints, and hence subproblems (27) and (28) are convex. If η = 0.5 (i.e., Φ⁻¹(1 − η) = 0), the first constraints of both subproblems reduce to those of the original AVMAX problem (as in (13) and (14)), i.e., the constraints become linear (convex). Finally, if η < 0.5 (i.e., Φ⁻¹(1 − η) > 0), the constraints become non-convex. The effect of η is illustrated in Fig. 1. From our extensive numerical experience we found that, for satisfactory performance, the η value should lie between 0.9 and 1, in which case the subproblems (27) and (28) are convex.

Fig. 1. Illustration of the effect of η in RAVMAX for N = 3 (panels show the cases η < 0.5, η = 0.5 and η > 0.5).

Some further technical aspects are as follows. The subproblems (27) and (28) are solved in an alternating fashion, similar to the original AVMAX explained in Section 3, except for the following difference: after each execution of the subproblems, νj = Bαj (by (17)) is taken if |p⋆| > |q⋆|; otherwise, νj = Gβj (by (18)) is taken. The proposed RAVMAX algorithm uses any N pixel vectors in X̃ for its initialization. As the subproblems are convex (for our desired choice η > 0.5), they can be solved effectively using available convex optimization solvers.
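The probabilistic-to-deterministic conversion quoted above from [7] can be sanity-checked by Monte Carlo. The sketch below uses the standard library's NormalDist for Φ⁻¹; the values of µ, δ and η are illustrative:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
mu, delta, eta = 0.5, 2.0, 0.95

# The conversion states: Pr(eps <= t) >= eta  iff  t >= delta * Phi^{-1}(eta) + mu
# for eps ~ N(mu, delta^2). Compute the smallest admissible threshold t.
t_min = delta * NormalDist().inv_cdf(eta) + mu

# Monte Carlo check of both directions of the equivalence.
eps = mu + delta * rng.standard_normal(500_000)
print(abs(np.mean(eps <= t_min) - eta) < 0.01)  # → True: at the threshold, prob ~ eta
print(np.mean(eps <= t_min - 0.2) < eta)        # → True: below it, the constraint fails
```

In (25)-(28), the same conversion is applied coordinatewise with µ = 0 and δ = √(Q_ii)·‖θj‖₂, which is what turns the chance constraints into second-order cone constraints for η > 0.5.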
5. SIMULATIONS
This section demonstrates the efficacy of the proposed RAVMAX
through comparison with other pure-pixel based HU algorithms. The
algorithms considered are N-FINDR [3], VCA [4] and AVMAX [5].
Table 1. Average φen and φab (degrees) of the various unmixing methods for different purity levels (ρ) and SNRs. An asterisk marks the minimum for each (ρ, SNR) pair (bold-faced in the original typesetting).

                          |           φen, SNR (dB)            |             φab, SNR (dB)
Method            ρ       |   20     25     30     35     40   |   20      25      30      35      40
N-FINDR           0.7     |  5.45   5.31   5.24   5.11   5.16  |  22.54   21.86   21.63   19.76   19.82
                  0.85    |  2.65   2.67   2.66   2.65   2.61  |   9.60    8.37    8.03    7.93    7.77
                  1       |  1.15   0.58   0.33   0.18   0.10  |   6.14    3.59    2.13    1.24    0.72
VCA               0.7     |  5.77   5.56   5.64   5.56   5.50  |  31.57   29.97   29.71   28.54   28.38
                  0.85    |  2.79   2.70   2.67   2.71   2.61  |  10.83    9.45    9.00    8.89    8.82
                  1       |  1.12   0.61   0.32   0.18   0.11  |   6.00    3.45    2.05    1.23    0.76
AVMAX             0.7     |  5.50   5.36   5.39   5.13   5.10  |  24.60   21.94   20.95   18.77   16.48*
                  0.85    |  2.77   2.64   2.65   2.69   2.65  |   9.15    7.96    7.10*   6.70*   6.48*
                  1       |  1.14   0.61   0.33   0.18   0.10  |   6.39    3.66    2.13    1.22    0.70
RAVMAX            0.7     |  4.87*  4.87*  4.88*  4.83*  4.90* |  18.95*  18.15*  18.13*  17.83*  17.94
(0.9 < η < 1)     0.85    |  2.54*  2.48*  2.56*  2.52*  2.51* |   8.56*   7.68*   7.44    7.39    7.34
                  1       |  0.79*  0.43*  0.24*  0.14*  0.08* |   4.34*   2.60*   1.56*   0.98*   0.59*
In all these algorithms, FCLS [10] is used to get the abundance
maps. The performance of the algorithms under test is evaluated
by performing 50 Monte Carlo runs for various purity levels (ρ)
and SNRs [8]. The simulation settings are L = 1000 (number of
pixels), N = 6 (number of endmembers) and M = 417 (number of spectral bands). In each run, 1000 noise-free observed pixel vectors were synthetically generated following the signal model in (2),
and the 6 endmembers (i.e., Alunite, Buddingtonite, Calcite, Copiapite, Kaolinite, and Muscovite) with 417 bands are selected from
USGS library [11], and the abundance vectors s[n] were generated
following Dirichlet distribution D(s[n], µ) with µ = 1N /N [4],
for purity levels ρ = 0.7, 0.85, 1. The synthetic data for different SNRs were obtained by adding independent and identically distributed zero-mean Gaussian noise to the generated noise-free data as per (1), where the SNR is defined as

$$\mathrm{SNR} = \frac{\sum_{n=1}^{L}\|\mathbf{x}[n]\|_2^2}{\sigma^2 M L}.$$

In our simulations, the noise covariance matrix is estimated from the observations, using the procedure elaborated in HySime [12]. The
performance index employed is the root-mean-square (rms) spectral
angle between the true one and estimated one [1,4,8]. The rms spectral angles between endmembers and their estimates are denoted as
φen , and those between abundance maps and their estimates are denoted as φab . The average φen and φab of the unmixing algorithms
over SNR = 20, 25, ..., 40 dB and ρ = 0.7, 0.85, 1 are shown in
Table 1, where each starred number denotes the minimum rms
spectral angle associated with a specific pair of (ρ, SNR) over all
the algorithms. One can readily infer from Table 1 that the proposed
RAVMAX algorithm generally yields the best performance for all
the values of ρ and SNRs.
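The performance index can be sketched as follows. This is a simplified reading of the rms spectral angle of [1, 4, 8]: the exact definition there matches estimates to the true endmembers before averaging, which is done here by a brute-force permutation search, so the sketch is practical only for small N:

```python
import numpy as np
from itertools import permutations

def rms_spectral_angle(A_true, A_est):
    """Root-mean-square spectral angle (degrees) between the columns of A_true
    and those of A_est, minimized over column permutations of the estimate."""
    N = A_true.shape[1]

    def angle(u, v):
        c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    best = np.inf
    for perm in permutations(range(N)):
        angles = [angle(A_true[:, i], A_est[:, p]) for i, p in enumerate(perm)]
        best = min(best, float(np.sqrt(np.mean(np.square(angles)))))
    return best

rng = np.random.default_rng(4)
A = rng.uniform(size=(50, 3))
print(rms_spectral_angle(A, A) < 1e-4)           # → True: identical signatures, ~0 degrees
print(rms_spectral_angle(A, A[:, ::-1]) < 1e-4)  # → True: invariant to column permutation
```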
6. CONCLUSION
To account for noise effects in the HU framework, we have presented a robust HU algorithm, RAVMAX, in which the original AVMAX problem with deterministic constraints is reformulated into one with chance constraints. The RAVMAX problem can
be efficiently solved by using available second-order cone program
solvers. The simulation results demonstrate the superior performance of RAVMAX over some existing benchmark HU algorithms
including the original AVMAX. The performance of RAVMAX with
real hyperspectral data is currently under investigation.
7. APPENDIX
Proof of Lemma 2: First, it is trivial to show that the objective function of (19) is equivalent to that of (13), since αj = Bνj, bjTB = |bj|T and BB = I. Next, consider the subproblem (19) (after ignoring the constant term in the objective function) and let S denote
the constraint set of (19). Then we have

$$\max_{\boldsymbol{\alpha}_j \in S} |\mathbf{b}_j|^T\boldsymbol{\alpha}_j = |\mathbf{b}_j|^T\mathbf{k}_j,$$

where kj = [maxSi [αj]i]i=1N−1, in which Si = {[αj]i ≤ [BX̃θj]i}. This implies that an optimal solution, denoted by (αj⋆, θj⋆), makes the inequality αj ⪯ BX̃θj hold with equality (i.e., the constraint is active). In other words, the optimal solution (αj⋆, θj⋆) belongs to the set {(αj, θj) | αj = BX̃θj, θj ⪰ 0, 1LTθj = 1}, which is equivalent to the constraint set of (13). Hence, we can conclude that the subproblems (19) and (13) are equivalent. By a similar argument, the equivalence of (20) and (14) can be proved. ∎
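The activity argument above can be seen on a toy linear program, assuming SciPy's linprog is available. With θj held fixed (an assumption made only for this illustration), maximizing |bj|Tαj under the elementwise relaxation drives every coordinate of αj to its upper bound, exactly as the proof asserts:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: c below stands in for B X~ theta_j, held fixed for illustration.
# Since every entry of |b| is positive, the maximizer of |b|^T alpha subject to
# alpha <= c (componentwise) is alpha* = c, i.e. the relaxed constraint is active.
b = np.array([0.7, 1.3, 0.2])
c = np.array([1.0, -2.0, 3.0])

res = linprog(-np.abs(b),                 # linprog minimizes, so negate |b|
              A_ub=np.eye(3), b_ub=c,
              bounds=[(None, None)] * 3,  # alpha is otherwise free
              method="highs")
print(res.status == 0)        # → True: solved to optimality
print(np.allclose(res.x, c))  # → True: the constraint alpha <= c is active
```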
8. REFERENCES
[1] N. Keshava and J. Mustard, “Spectral unmixing,” IEEE Signal Process.
Mag., vol. 19, no. 1, pp. 44-57, Jan. 2002.
[2] J. W. Boardman, F. A. Kruse, and R. O. Green, “Mapping target signatures via partial unmixing of AVIRIS data,” in Proc. Summ. JPL Airborne Earth Sci. Workshop, Pasadena, CA, Dec. 9-14, 1995, pp. 23-26.
[3] M. E. Winter, “N-findr: An algorithm for fast autonomous spectral
end-member determination in hyperspectral data,” in Proc. SPIE Conf.
Imaging Spectrometry, Pasadena, CA, Oct. 1999, pp. 266-275.
[4] J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis:
A fast algorithm to unmix hyperspectral data,” IEEE Trans. Geosci.
Remote Sens., vol. 43, no. 4, pp. 898-910, Apr. 2005.
[5] T.-H. Chan, W.-K. Ma, C.-Y. Chi and A. Ambikapathi, “Hyperspectral unmixing from a convex analysis and optimization perspective,” in
Proc. First IEEE WHISPERS, Grenoble, France, August 26-28, 2009.
[6] A. Ambikapathi, T.-H. Chan, W.-K. Ma and C.-Y. Chi, “A robust minimum volume enclosing simplex algorithm for hyperspectral unmixing,” in Proc. IEEE ICASSP, Dallas, Mar. 14-19, 2010, pp. 1202-1205.
[7] S. Boyd and L. Vandenberghe, Convex Optimization, UK: Cambridge
Univ. Press, 2004.
[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, “A convex analysis based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,” IEEE Trans. Signal Processing, vol. 57, no. 11,
pp. 4418-4432, Nov. 2009.
[9] G. Strang, Linear Algebra and Its Applications, CA: Thomson, 4th
edition, 2006.
[10] D. Heinz and C.-I. Chang, “Fully constrained least squares linear
mixture analysis for material quantification in hyperspectral imagery,”
IEEE Trans. Geosci. Remote Sens., vol. 39, no. 3, pp. 529-545, Mar.
2001.
[11] USGS spectral library. Available online: http://speclab.cr.usgs.gov/cuprite.html.
[12] J. M. B. Dias and J. M. P. Nascimento, “Hyperspectral subspace identification,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 8, pp. 2435-2445, Aug. 2008.