This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 1
Unsupervised Dimensionality Reduction
for Hyperspectral Imagery via Local
Geometric Structure Feature Learning
Guangyao Shi, Hong Huang, Member, IEEE, and Lihua Wang
Abstract— Hyperspectral images (HSIs) possess a large num-
ber of spectral bands, which easily lead to the curse of dimen-
sionality. To improve the classification performance, a huge
challenge is how to reduce the number of spectral bands and
preserve the valuable intrinsic information in the HSI. In this
letter, we propose a novel unsupervised dimensionality reduction
method called local neighborhood structure preserving embed-
ding (LNSPE) for HSI classification. At first, LNSPE reconstructs
each sample with its spectral neighbors and obtains the optimal
weights for constructing the adjacency graph by modifying its loss
function. Then, to discover the scatter information of the training
samples, LNSPE minimizes the scatter between the pixels and
the corresponding neighbors and maximizes the total scatter of
the HSI data. Finally, it incorporates the scatter information and
the dual graph structure to enhance the aggregation of the HSI.
As a result, LNSPE can effectively reveal the intrinsic structure
and improve the classification performance of the HSI data. The
experimental results on two real hyperspectral data sets exhibit
the efficiency and superiority of LNSPE over some state-of-the-art
methods.
Index Terms—Dimensionality reduction (DR), adjacency graph, hyperspectral images (HSIs), intrinsic structure, scatter information.
I. INTRODUCTION
HYPERSPECTRAL images (HSIs) are captured by satel-
liteborne and airborne sensors in hundreds of spectral
bands, and each pixel can be represented as a high-dimensional
vector [1], [2]. Although abundant spectral information is
beneficial for improving classification performance, it increases
the computational complexity, which requires huge computational
resources and storage capacity [3], [4]. Furthermore, the
high-dimensional characteristic of the HSI data often causes the
Hughes phenomenon, especially when there are only a few training
samples available [5]. Therefore, it is an
urgent task to reduce the dimensionality of the HSI data while
preserving the useful intrinsic information.
Manuscript received April 24, 2019; revised July 16, 2019; accepted
September 15, 2019. This work was supported in part by the
Basic and Frontier Research Programmes of Chongqing under Grant
cstc2018jcyjAX0093 and Grant cstc2018jcyjAX0633, in part by the
Chongqing University Postgraduates Innovation Project under Grant
CYB18048 and Grant CYS18035, and in part by the National
Science Foundation of China under Grant 41371338. (Corresponding author:
Hong Huang.)
The authors are with the Key Laboratory of Optoelectronic Tech-
nology and Systems of the Education Ministry of China, Chongqing
University, Chongqing 400044, China (e-mail: shiguangyao@cqu.edu.cn;
hhuang@cqu.edu.cn; 20170802019t@cqu.edu.cn).
Color versions of one or more of the figures in this letter are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/LGRS.2019.2944970
Dimensionality reduction (DR) serves as an effective tech-
nique to address the aforementioned issues, and it aims to
find a low-dimensional space where some desired properties
can be preserved. In recent years, many effective DR meth-
ods have been developed for classification [6], [7]. Principal
component analysis (PCA) and linear discriminant analy-
sis (LDA) are two well-known methods based on subspace
learning. PCA aims at finding a projection direction along
which the data have the maximum variance [8], while LDA
seeks for the optimal projection matrix by maximizing the
between-class variance and minimizing the within-class vari-
ance in the low-dimensional space [9]. Although these DR
methods proved to be effective for classification, they do not
consider the manifold structure on which high-dimensional
data may possibly reside [10], [11].
Recently, various manifold learning methods have been
developed for DR, including local linear embedding (LLE)
[12], Laplacian eigenmaps (LEs) [13], and local tangent
space alignment (LTSA) [14]. LLE assumes that the global
manifold can be reconstructed by several small overlapped
regions, and it performs linearization to reconstruct the local
properties of data point by its neighbors. LE builds a graph
incorporating neighborhood information of data and computes
a low-dimensional representation by optimally preserving local
neighborhood information. LTSA represents the local geome-
try of the manifold using tangent spaces learned by fitting an
affine subspace in a neighborhood of each data point. Owing
to their nonlinear characteristic, these methods suffer from the
out-of-sample problem, since no explicit projection vector is
produced during the DR process.
To tackle this problem, many linear manifold learning
methods were proposed, such as neighborhood preserving
embedding (NPE) [15], locality preserving projections (LPP)
[16], and linear LTSA (LLTSA) [17]. However, they cannot
effectively reveal the structural relationships of pairwise neigh-
bors, which limits their discriminating ability for land use clas-
sification. To unify the above DR methods, a graph-embedding
(GE) framework was proposed to describe many existing DR
techniques [18]. Sparse manifold embedding (SME) [19] uses
the sparse coefficients to construct a similarity graph and pre-
serves this sparse similarity in embedding space, which further
improves the classification performance of the HSI. Graph-based
discriminant analysis via spectral similarity (GDA-SS)
[20] uses the spectral difference of pairwise pixels and sets a
threshold to evaluate the similarity, and it can effectively reveal
the discriminant manifold structure of the data. However,
it requires prior label information of the samples, which limits
its application in certain scenes.
1545-598X © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Authorized licensed use limited to: Hong Huang. Downloaded on February 15,2020 at 15:54:38 UTC from IEEE Xplore. Restrictions apply.
To address the aforementioned problems, a new unsupervised
DR method called the local neighborhood structure
preserving embedding (LNSPE) algorithm is proposed for
HSI classification. At first, LNSPE reconstructs each sample
with its spectral neighbors, and it designs a novel loss function
to obtain the optimal weights for constructing an adjacency
graph. Then, it reveals the scatter information of training
samples by minimizing the local neighborhood scatter matrix
and maximizing the total scatter matrix of training samples
simultaneously. Finally, the scatter information and the dual
graph structure are integrated to enhance the aggregation
of the HSI data. As a result, LNSPE can extract effective
discriminant features and subsequently improve the classifi-
cation performance of the HSI. Experimental results show
that LNSPE achieved better performance on the PaviaU and
Botswana HSI data sets than some of the state-of-the-art DR
methods.
This letter is structured as follows. Section II gives
a detailed description of the proposed LNSPE method.
Section III presents the experimental results on the PaviaU
and Botswana HSI data sets to evaluate the effectiveness of
the LNSPE. Finally, Section IV concludes this letter and gives
some suggestions for future work.
II. PROPOSED METHOD
Suppose a data set is composed of $n$ points, and the $i$-th point can be denoted as $x_i \in \mathbb{R}^D$, where $D$ is the band number. Let $\ell(x_i) \in \{1, 2, \ldots, c\}$ be the class label of $x_i$, where $c$ indicates the number of land cover types in the HSI. The goal of linear DR methods is to obtain a projection matrix $V \in \mathbb{R}^{D \times d}$ that maps $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{D \times n}$ to $Y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^{d \times n}$, where $Y$ is the low-dimensional representation of $X$ and $d \ll D$ is the embedding dimension. With the projection matrix $V$, we can compute $Y$ as $Y = V^T X$.
A. Dual Structure Preserving Model
In the HSI, the similarity between different pixels is usually
measured by the spectral-domain Euclidean distance. Two
pixels with a small distance have a high probability of
belonging to the same class. To effectively use the spectral
neighborhood information in the HSI, we reconstruct each sample with its $k$ nearest neighbors (NNs). Denote the $k$ NNs of $x_i$ as $S(x_i) = [x_{i1}, x_{i2}, \ldots, x_{ik}]$; then the reconstructed pixel $x_i^*$ can be given as follows:

$$x_i^* = \frac{\sum_{x_j \in S(x_i)} v_j x_j}{\sum_{x_j \in S(x_i)} v_j} = \frac{\sum_{m=1}^{k} v_m x_{im}}{\sum_{m=1}^{k} v_m} \quad (1)$$

where $v_m$ is the weight of $x_{im}$, and it can be calculated as

$$v_m = \exp\left(-\frac{\|x_i - x_{im}\|^2}{2 t_i^2}\right) \quad (2)$$

where $t_i = (1/k) \sum_{h=1}^{k} \|x_i - x_{ih}\|$ is a kernel parameter.
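As an illustration, the reconstruction in (1) and (2) can be sketched in a few lines of NumPy. This is not the authors' code; the bands-by-pixels layout of `X` and the precomputed neighbor indices are assumptions of the sketch:

```python
import numpy as np

def reconstruct_pixel(X, i, idx_knn):
    """Reconstruct pixel x_i from its k spectral nearest neighbors:
    heat-kernel weights of Eq. (2), weighted average of Eq. (1).
    X is a D x n array (bands x pixels); idx_knn indexes the k NNs."""
    xi = X[:, i]                                        # D-vector
    nbrs = X[:, idx_knn]                                # D x k neighbors
    dists = np.linalg.norm(nbrs - xi[:, None], axis=0)  # ||x_i - x_im||
    t_i = dists.mean()                                  # kernel parameter t_i
    v = np.exp(-dists**2 / (2.0 * t_i**2))              # Eq. (2)
    return nbrs @ v / v.sum()                           # Eq. (1)
```

The weighted average pulls the reconstruction toward the spectrally closest neighbors, since the heat-kernel weights decay with distance.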
After obtaining the reconstructed pixels, we have the original pixels $X$ and the reconstructed pixels $X^*$. Then, we can construct two graphs $G(X, W)$ and $G^*(X^*, W)$, where $X$ and $X^*$ serve as the vertexes of $G$ and $G^*$, respectively, and $W$ is the weight matrix shared by both graphs. Denote $w_{ij}$ as the weight of the edge from node $i$ to node $j$; it can be calculated by the following redefined loss function:

$$\min \sum_{i=1}^{n} \left\| x_i - \sum_{j=1}^{k} w_{ij} x_j + x_i^* - \sum_{j=1}^{k} w_{ij} x_j^* \right\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{k} w_{ij} = 1, \ \forall x_j \in S(x_i). \quad (3)$$
Denote the $k$-th spectral neighbor of $x_i$ as $x_i^k$, and let $h_i^k = x_i - x_i^k + x_i^* - x_i^{*k}$ measure the spectral similarity between $x_i$ and its $k$-th spectral neighbor. The objective function can then be simplified as

$$J(W) = \min \sum_{i=1}^{n} \left\| x_i - \sum_{j=1}^{k} w_{ij} x_j + x_i^* - \sum_{j=1}^{k} w_{ij} x_j^* \right\|^2 = \min \sum_{i=1}^{n} \left\| \sum_{j=1}^{k} w_{ij} \left( x_i - x_j + x_i^* - x_j^* \right) \right\|^2 = \min \sum_{i=1}^{n} w_i^T z_i w_i \quad (4)$$

where $z_i = [h_i^1, h_i^2, h_i^3, \ldots, h_i^k]^T [h_i^1, h_i^2, h_i^3, \ldots, h_i^k]$ and $w_i = [w_{i1}, w_{i2}, w_{i3}, \ldots, w_{ik}]$. Then, (4) can be given as follows:
$$\min \sum_{i=1}^{n} w_i^T z_i w_i \quad \text{s.t.} \quad \sum_{j=1}^{k} w_{ij} = 1. \quad (5)$$
With the Lagrange multiplier method, $w_{ij}$ can be obtained as

$$w_{ij} = \frac{\sum_{m=1}^{k} \left(z_i^{-1}\right)^{jm}}{\sum_{p=1}^{k} \sum_{q=1}^{k} \left(z_i^{-1}\right)^{pq}} \quad (6)$$

where $z_i^{jm} = (h_i^j)^T h_i^m$ and $z_i^{pq} = (h_i^p)^T h_i^q$.
With the weight matrix $W$, the projection matrix $V$ for the low-dimensional embedding can be obtained by solving the following optimization problem:

$$\min J = \min \sum_{i=1}^{n} \left\| y_i - \sum_{j=1}^{k} w_{ij} y_{ij} \right\|^2 = \min V^T X M X^T V \quad \text{s.t.} \quad \sum_{i=1}^{n} y_i = 0, \ \frac{1}{n} V V^T = I \quad (7)$$

in which $M = (I - W)(I - W)^T$ and $I = \mathrm{diag}(1, 1, \ldots, 1)$. Problem (7) can be solved with the Lagrange multiplier method, and it can be transformed into the following form:

$$X M X^T V = \lambda X X^T V. \quad (8)$$
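A sketch of solving the generalized eigenproblem (8) with plain NumPy. Keeping the eigenvectors of the smallest eigenvalues is consistent with the minimization in (7); the tiny ridge on $XX^T$ is an assumption added for numerical stability, not part of the letter:

```python
import numpy as np

def dual_graph_projection(X, W, d):
    """Solve X M X^T v = lambda X X^T v (Eq. (8)) with
    M = (I - W)(I - W)^T, and return the d eigenvectors with the
    smallest eigenvalues.  X is D x n; W is n x n (rows sum to 1)."""
    n = X.shape[1]
    IW = np.eye(n) - W
    M = IW @ IW.T                               # (I - W)(I - W)^T
    A = X @ M @ X.T                             # left-hand matrix
    B = X @ X.T + 1e-6 * np.eye(X.shape[0])     # ridge for invertibility
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)               # ascending eigenvalues
    return vecs[:, order[:d]].real              # D x d projection V
```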
B. Neighborhood Scatter Extraction Model
Considering that pixels with similar spectra are most
likely to belong to the same class, we use the scatter information
of each sample to learn the projection relationships from
the high-dimensional space to the low-dimensional space.
Suppose the $r$ NNs of $x_i$ can be represented as $N(x_i) = [x_{i1}, x_{i2}, \ldots, x_{ir}]$; then the scatter matrix between $x_i$ and $N(x_i)$ can be expressed as follows:

$$h_i = \frac{\sum_{p=1}^{r} v_p (x_i - x_{ip})(x_i - x_{ip})^T}{\sum_{q=1}^{r} v_q} \quad (9)$$

where the weight $v_p$, defined in (2), measures the spectral similarity between the neighboring pixels and the central pixel.
For all the pixels in the HSI, the scatter can be represented as

$$H = \sum_{i=1}^{n} h_i = \sum_{i=1}^{n} \frac{\sum_{p=1}^{r} v_p (x_i - x_{ip})(x_i - x_{ip})^T}{\sum_{q=1}^{r} v_q}. \quad (10)$$
In addition, the total scatter of the training samples can be defined as follows:

$$S = \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T \quad (11)$$

where $\bar{x}$ is the mean of the training samples.
We seek a linear projection matrix such that the local neighborhood preserving scatter is minimized, whereas the total scatter is maximized in the embedding space. Therefore, the optimal projection $V$ can be obtained by solving the generalized eigenvalue problem

$$H V = \lambda S V \quad (12)$$

where $\lambda$ is the eigenvalue of (12).
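The two scatter matrices in (10) and (11) can be accumulated directly from the pixel matrix. A minimal NumPy sketch (not the authors' code), assuming `nbr_idx[i]` holds the precomputed $r$ nearest-neighbor indices of pixel $i$:

```python
import numpy as np

def scatter_matrices(X, nbr_idx):
    """Local neighborhood scatter H (Eq. (10)) and total scatter S
    (Eq. (11)).  X is D x n; weights v_p follow Eq. (2)."""
    D, n = X.shape
    H = np.zeros((D, D))
    for i in range(n):
        diffs = X[:, [i]] - X[:, nbr_idx[i]]     # D x r, columns x_i - x_ip
        dists = np.linalg.norm(diffs, axis=0)
        t = dists.mean()                         # kernel parameter of Eq. (2)
        v = np.exp(-dists**2 / (2.0 * t**2))
        H += (diffs * v) @ diffs.T / v.sum()     # Eq. (9), summed into (10)
    xm = X.mean(axis=1, keepdims=True)
    S = (X - xm) @ (X - xm).T                    # Eq. (11)
    return H, S
```

Both outputs are symmetric $D \times D$ matrices, as required by the generalized eigenproblem (12).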
C. Neighborhood Structure Joint Feature Learning
To learn a more effective projection, we propose an LNSPE
algorithm for HSI data, which preserves the scatter information
and the dual graph structure to obtain discriminant features.
Combining (8) and (12), the projection matrix $V$ can be obtained by solving the following eigenvalue problem:

$$[(1 - a) X M X^T + a H] V = \lambda [(1 - a) X X^T + a S] V \quad (13)$$

where $a$ is a nonnegative tradeoff parameter between the local neighborhood structure and the dual graph structure. With the eigenvectors $v_1, v_2, \ldots, v_d$ corresponding to the first $d$ eigenvalues, the optimal projection matrix can be represented as $V = [v_1, v_2, \ldots, v_d]$. Then, the low-dimensional embedding of the HSI data can be obtained by $Y = V^T X$.
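The combined step in (13) reduces to one generalized eigenproblem over the four matrices built earlier. A hedged sketch: the eigenvalue ordering (ascending, matching the minimized graph and neighborhood-scatter terms) and the small conditioning term are assumptions, since the letter only says "the first $d$ eigenvalues":

```python
import numpy as np

def lnspe_projection(XMXT, H, XXT, S, a, d):
    """Solve Eq. (13): [(1-a) XMX^T + aH] V = lambda [(1-a) XX^T + aS] V,
    keeping the d eigenvectors with the smallest eigenvalues."""
    A = (1 - a) * XMXT + a * H
    B = (1 - a) * XXT + a * S
    B = B + 1e-6 * np.trace(B) / B.shape[0] * np.eye(B.shape[0])  # conditioning
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vecs[:, order[:d]].real          # D x d; embedding is Y = V.T @ X
```

With `a = 0` this falls back to the dual-graph problem (8), and with `a = 1` to the pure scatter problem (12), which is the tradeoff the letter tunes in Fig. 1.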
III. EXPERIMENTAL RESULTS AND ANALYSIS
In this section, two public HSI data sets are adopted to
demonstrate the effectiveness of LNSPE by comparing it with
some state-of-the-art DR algorithms.
Fig. 1. OAs for (a) PaviaU and (b) Botswana data sets under different values of $a$.
Fig. 2. OAs for the PaviaU and Botswana data sets with different dimensions.
Fig. 3. OAs for (a) PaviaU and (b) Botswana data sets with $k$ and $r$.
TABLE I
CLASSIFICATION RESULTS [PERCENTAGE (%)] USING DIFFERENT ALGORITHMS FOR PaviaU DATA SET
A. Data Description
1) Pavia University: It was collected by the Reflective
Optics System Imaging Spectrometer (ROSIS) sensor over
Pavia University in 2002. The full scene consists of 610 × 340
pixels with 115 spectral bands, and all pixels belong to nine
classes. Considering that 12 bands suffered from serious water
absorption, the remaining 103 bands are used for experiments.
2) Botswana: It was collected by the NASA EO-1 satellite
over the Okavango Delta, Botswana, in 2001. The full scene
consists of 1476 × 256 pixels with 242 spectral bands, and
all pixels belong to 14 classes. After removing 97 bands due
Fig. 4. Classification maps for different methods with NN classifier on the PaviaU data set. (a) Ground truth. (b) Training samples. (c) RAW. (d) PCA.
(e) NPE. (f) LPP. (g) LDA. (h) LFDA. (i) MMC. (j) MFA. (k) LGSFA. (l) GDA-SS. (m) SME. (n) LNSPE.
TABLE II
CLASSIFICATION RESULTS [PERCENTAGE (%)] USING DIFFERENT ALGORITHMS FOR BOTSWANA DATA SET
to serious noise affection, the remaining 145 bands are used
for experiments.
B. Experimental Setup
In each experiment, we randomly divided the HSI data
into the training and test sets, and employed the NN for
classification. Then, the overall classification accuracy (OA) was
used to evaluate the effectiveness of each algorithm. For
robustness, all the experiments were repeated ten times.
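The evaluation protocol above (1-NN classification in the embedded space, scored by OA) can be sketched as follows. This is an illustrative implementation, not the authors' code; the $d \times m$ layout of the embedded samples is an assumption:

```python
import numpy as np

def overall_accuracy(Y_train, lab_train, Y_test, lab_test):
    """1-NN classification in the embedded space followed by
    overall accuracy (OA).  Y_* are d x m arrays of embedded pixels;
    lab_* are length-m label arrays."""
    # squared Euclidean distances between every test and training sample
    d2 = (np.sum(Y_test**2, axis=0)[:, None]
          + np.sum(Y_train**2, axis=0)[None, :]
          - 2.0 * Y_test.T @ Y_train)
    pred = lab_train[np.argmin(d2, axis=1)]   # label of nearest neighbor
    return np.mean(pred == lab_test)          # fraction correctly classified
```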
To verify the classification performance of LNSPE, we com-
pared it with RAW, PCA, NPE, LPP, LDA, local Fisher
discriminant analysis (LFDA), maximum margin criterion
(MMC), marginal Fisher analysis (MFA), local geometric
structure Fisher analysis (LGSFA) [6], GDA-SS, and SME,
and the RAW method indicates that the OAs are obtained
by the NN classifier without DR. Among all DR algorithms,
PCA, NPE, LPP, and SME are unsupervised algorithms that
do not use prior label information during the process of DR,
while LDA, LFDA, MMC, MFA, LGSFA, and GDA-SS are
supervised methods, which use the prior label information to
improve the classification performance. To better compare the
classification results, we choose optimal parameters for each
algorithm. For NPE, LPP, and LFDA, we set the number
of neighbors to 7. MFA and LGSFA have two important
parameters, the intraclass neighbor number $k_1$ and the interclass
neighbor number $k_2 = \beta k_1$, and we set $k_1 = 9$ and $\beta = 20$,
respectively. As for LNSPE, the parameter $a$ is tuned in the range
$\{0, 0.1, 0.2, \ldots, 1\}$, and the results are shown in Fig. 1,
where 30 labeled samples per class were selected for training,
and the remaining samples were used for testing.
As we can see from Fig. 1, the optimal parameter is $a = 0.3$
for PaviaU and $a = 0.5$ for Botswana. Fig. 2 shows the OAs
of each method versus different embedding dimensions. It can
be seen that the OAs of all DR algorithms increase gradually
with the increase in $d$, and then maintain a stable value when
the dimension exceeds 20. Therefore, we set the embedding
dimension $d$ to 30, and the dimension of the LDA algorithm
is set to $c - 1$, where $c$ is the class number of the HSI.
Fig. 3 shows the OAs with different values of $k$ and $r$.
As we can see, the OAs first improve with the increase in
$k$ and then maintain a stable value. The reason is that a
larger number of spectral neighbors is helpful for extracting
discriminant features for HSI classification. However, if $k$ is
too large, the discriminant information in the neighborhood
structure becomes redundant for DR of the HSI. Furthermore,
the OAs quickly ascend and then decrease with the increase
in $r$, which indicates that a larger $r$ is not conducive to
further improvement of the OAs. Based on the above analysis,
we choose $r = 2$, $k = 25$ for the PaviaU data set, and $r = 3$,
$k = 25$ for the Botswana data set in the following experiments.
C. Classification Results
To demonstrate the classification performance of various DR
algorithms on the PaviaU and Botswana data sets, $n_i$ ($n_i = 20$,
30, 40, 50, 60) samples were randomly selected from each
class as training samples, and the remaining samples were used
for testing. Tables I and II report the classification results for
the PaviaU and Botswana data sets, respectively. Fig. 4 shows
the classification maps for different methods on the PaviaU
TABLE III
COMPUTATIONAL TIME (IN SECONDS) OF DIFFERENT ALGORITHMS ON THE PaviaU AND BOTSWANA DATA SETS
data set, in which 1% of the samples were randomly selected from
each class for training, and the remaining were used for testing.
As can be seen from Tables I and II, the OAs of all
algorithms improved with the increase of $n_i$. The reason is that
more training samples can lead to abundant information for
feature learning, which is helpful to HSI classification. Among
all the DR methods, LNSPE achieves better classification performance
than the other state-of-the-art DR algorithms. As shown
in Fig. 4, the classification map of LNSPE is smoother
than those of the other algorithms, especially in the regions of Asphalt,
Meadows, Bitumen, and Shadows. This is because LNSPE considers the
relationships of pairwise neighbors in the adjacency graph and the
scatter information of samples, which helps discover the
intrinsic structure of the HSI data and extract more effective
discriminant features.
D. Computational Complexity
As for the proposed LNSPE method, the reconstructed
pixels $X^*$ are calculated with $O(nk)$. The reconstruction weight
matrix $W$ costs $O(nk^3)$. The cost of $M$ is $O(n^2)$. The calculation
of $X M X^T$ takes $O(Dn^2)$. The local neighborhood
preserving scatter matrix $H$ and the total scatter matrix $S$
take $O(n^2 r^2)$ and $O(n)$, respectively. The generalized eigenvalue
problem of (13) is computed with a cost of $O(D^3)$.
Therefore, the total computational complexity of LNSPE is
$O(nk^3 + Dn^2 + n^2 r^2 + D^3)$, and it mainly depends on the number of
training samples, the number of bands, and the number of neighbors.
To quantitatively compare the complexity of each algorithm,
we show the computational time of each algorithm in Table III.
All the results were obtained on a personal computer with an
i3-7100 CPU and 12 GB of memory, running 64-bit Windows 10
and MATLAB 2017a. As shown in Table III, the proposed
LNSPE method is slower than NPE. However, the slight
increase in computational time is acceptable relative to the
improvement in classification performance.
IV. CONCLUSION
This letter proposed a novel unsupervised DR method, the
LNSPE algorithm, for HSI classification. At first, LNSPE
considers the relationships of pairwise neighbors by constructing
a dual adjacency graph. Then, LNSPE reveals the
scatter information of training samples by simultaneously minimizing
the local neighborhood scatter and maximizing the total scatter of
the training samples. Finally, an optimal projection matrix is
learned by exploring the dual graph structure and the scatter
information. LNSPE can effectively reveal the intrinsic
structure of the HSI and extract effective discriminant features,
which benefits the classification performance of the HSI.
Experimental results on two real data sets
demonstrated the superiority of LNSPE for HSI classification.
Our future work will focus on how to incorporate prior label
information to further improve the classification performance
of LNSPE.
REFERENCES
[1] H. Luo, C. Liu, C. Wu, and X. Guo, “Urban change detection based
on Dempster–Shafer theory for multitemporal very high-resolution
imagery,” Remote Sens., vol. 10, no. 7, p. 980, Jun. 2018.
[2] H. Huang, G. Shi, H. He, Y. Duan, and F. Luo, “Dimension-
ality reduction of hyperspectral imagery based on spatial-spectral
manifold learning,” IEEE Trans. Cybern., to be published. doi:
10.1109/TCYB.2019.2905793.
[3] F. Luo, B. Du, L. Zhang, L. Zhang, and D. Tao, “Feature learning
using spatial-spectral hypergraph discriminant analysis for hyperspectral
image,” IEEE Trans. Cybern., vol. 49, no. 7, pp. 2406–2419, Jul. 2019.
[4] J. Peng, W. Sun, and Q. Du, “Self-paced joint sparse representation for
the classification of hyperspectral images,” IEEE Trans. Geosci. Remote
Sens., vol. 57, no. 2, pp. 1183–1194, Feb. 2019.
[5] Z. Wang, B. Du, L. Zhang, L. Zhang, and X. Jia, “A novel semisuper-
vised active-learning algorithm for hyperspectral image classification,”
IEEE Trans. Geosci. Remote Sens., vol. 55, no. 6, pp. 3071–3083,
Jun. 2017.
[6] F. Luo, H. Huang, Y. Duan, J. Liu, and Y. Liao, “Local geometric
structure feature for dimensionality reduction of hyperspectral imagery,”
Remote Sens., vol. 9, no. 8, p. 790, Aug. 2017.
[7] W. He, H. Zhang, L. Zhang, W. Philips, and W. Liao, “Weighted sparse
graph based dimensionality reduction for hyperspectral images,” IEEE
Geosci. Remote Sens. Lett., vol. 13, no. 5, pp. 686–690, May 2016.
[8] R. Hang and Q. Liu, “Dimensionality reduction of hyperspectral image
using spatial regularized local graph discriminant embedding,” IEEE
J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 11, no. 9,
pp. 3262–3271, Sep. 2018.
[9] H. Xu, H. Zhang, W. He, and L. Zhang, “Superpixel-based spatial-
spectral dimension reduction for hyperspectral imagery classification,”
Neurocomputing, vol. 360, pp. 138–150, Sep. 2019.
[10] Y. Zhou, J. Peng, and C. L. P. Chen, “Dimension reduction using spatial
and spectral regularized local discriminant embedding for hyperspectral
image classification,” IEEE Trans. Geosci. Remote Sens., vol. 53, no. 2,
pp. 1082–1095, Feb. 2015.
[11] H. Yu, L. Gao, W. Li, Q. Du, and B. Zhang, “Locality sensitive dis-
criminant analysis for group sparse representation-based hyperspectral
imagery classification,” IEEE Geosci. Remote Sens. Lett., vol. 14, no. 8,
pp. 1358–1362, Aug. 2017.
[12] Y. Chen, Z. Lai, W. Wong, L. Shen, and Q. Hu, “Low-rank linear
embedding for image recognition,” IEEE Trans. Multimedia, vol. 20,
no. 12, pp. 3212–3222, Dec. 2018.
[13] W. Sun, G. Yang, B. Du, L. Zhang, and L. Zhang, “A sparse and low-
rank near-isometric linear embedding method for feature extraction in
hyperspectral imagery classification,” IEEE Trans. Geosci. Remote Sens.,
vol. 55, no. 7, pp. 4032–4046, Jul. 2017.
[14] J. Wang, X. Sun, and J. Du, “Local tangent space alignment via nuclear
norm regularization for incomplete data,” Neurocomputing, vol. 273,
pp. 141–151, Jan. 2018.
[15] S. Wang and W. Zhu, “Sparse graph embedding unsupervised feature
selection,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 48, no. 3,
pp. 329–341, Mar. 2018.
[16] F. Zhong, J. Zhang, and D. Li, “Discriminant locality preserving
projections based on L1-norm maximization,” IEEE Trans. Neural Netw.
Learn. Syst., vol. 25, no. 11, pp. 2065–2074, Nov. 2014.
[17] Y. Lu, Z. Lai, Z. Fan, J. Cui, and Q. Zhu, “Manifold discriminant
regression learning for image classification,” Neurocomputing, vol. 166,
pp. 475–486, Oct. 2015.
[18] Y. Wei, Y. Zhou, and H. Li, “Spectral-spatial response for hyperspectral
image classification,” Remote Sens., vol. 9, no. 3, p. 203, Feb. 2017.
[19] H. Huang, F. Luo, J. Liu, and Y. Yang, “Dimensionality reduc-
tion of hyperspectral images based on sparse discriminant manifold
embedding,” ISPRS J. Photogramm. Remote Sens., vol. 106, pp. 42–54,
Aug. 2015.
[20] F. Feng, W. Li, Q. Du, and B. Zhang, “Dimensionality reduction
of hyperspectral image with graph-based discriminant analysis con-
sidering spectral similarity,” Remote Sens., vol. 9, no. 4, p. 323,
Mar. 2017.