Available online at www.sciencedirect.com

ScienceDirect

www.elsevier.com/locate/procedia
Procedia Computer Science 126 (2018) 656–664

22nd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems

Fast Spatial Spectral Schroedinger Eigenmaps algorithm for hyperspectral feature extraction

Asma Fejjari a,*, Karim Saheb Ettabaa b, Ouajdi Korbaa a

a MARS (Modeling of Automated Reasoning Systems) Research Laboratory, Higher Institute of Computer Science and Communication Technologies, G.P.1 Hammam Sousse 4011, University of Sousse, Tunisia.
b IMT Atlantique, Iti Department, Telecom Bretagne, 655 Street of Technopôle 29200 Plouzané, France.

Abstract

Based on the Laplacian Eigenmaps (LE) algorithm and a potential matrix, the Spatial Spectral Schroedinger Eigenmaps (SSSE) technique has proved highly effective for hyperspectral dimensionality reduction. Experimentally, however, SSSE suffers from a high computing time, which may hinder its contribution to the remote sensing field. In this paper, a fast variant of the SSSE approach, called Fast SSSE, is proposed. The suggested method substitutes the quadratic constraint employed in the optimization problem with a linear constraint. This overhaul preserves the data properties in a way analogous to the SSSE technique, but with a fast implementation. Two real hyperspectral data sets were adopted during the experimental process. Experimental analysis exhibited good classification accuracy with a reduced computational effort, compared with the original SSSE approach.

© 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer-review under responsibility of KES International.

Keywords: Laplacian Eigenmaps; potential matrix; Spatial Spectral Schroedinger Eigenmaps; hyperspectral dimensionality reduction; optimization problem; linear constraint.

1. Introduction

Due to its composition of tens or even hundreds of narrow spectral bands, hyperspectral images pave the way for

* Corresponding author. E-mail address: [email protected]

1877-0509 © 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer-review under responsibility of KES International.
10.1016/j.procs.2018.07.300




very rich, accurate and detailed information about the covered Earth surface area. This high dimensional nature obviously leads to redundancy and computational burden issues. Accordingly, reducing the hyperspectral dimensionality while keeping the significant information is a key pre-processing step for hyperspectral image analysis [1]-[3]. In the last few years, several dimensionality reduction techniques have been set up; these methods can be categorized into two main classes [3]-[4]: feature extraction techniques and feature selection ones. While the former approaches aim to find the most representative set of sub-spaces, the latter techniques seek to choose an optimal subset of bands from the original band set, according to specific performance criteria. The chief difference between these two classes is whether a dimensionality reduction technique transforms or conserves the meaning of the original data set in the reduction process. In this paper, we discuss feature extraction techniques. Feature extraction algorithms tend to find a set of points Y = {y1, y2, …, yp} in ℝd of an original data set X = {x1, x2, …, xp} ∈ ℝn (d ≪ n), which preserves all the required information. Principal component analysis (PCA) [6], isometric feature mapping (Isomap) [7], locally linear embedding (LLE) [8] and Laplacian Eigenmaps (LE) [9] are among the most popular and powerful feature extraction algorithms. This study focuses specifically on a robust variant of LE [9] called Spatial Spectral Schroedinger Eigenmaps (SSSE) [11]. SSSE is a semi-supervised learning technique which exploits spatial and spectral data in a hyperspectral data set based on the LE algorithm and a Schroedinger operator. Practically, SSSE is an effective reduction technique; it offers outstanding results for classification tasks, but it suffers from a high processing time.
Thereupon, to overcome this computational burden, we propose a fast variant of SSSE which aims to decrease its computing effort. Our proposition consists in replacing the LE framework, in the SSSE approach, with a fast variant of the same technique called "Fast LE" [12]. The new framework substitutes the quadratic constraint used in the quadratic optimization problem with a new linear constraint. Compared to the original SSSE approach, this reformulation provides an efficiently computable solution. The resolution of the newly formulated problem can be achieved by a fast sparse Cholesky decomposition [14]-[15] that exploits the sparse nature of the potential matrix. The proposed technique provides a well-formulated way of performing hyperspectral dimension reduction in a computationally efficient manner without affecting the significant HSI data properties. In this paper, we provide a fast approximation to the SSSE algorithm, based on the Fast LE framework, and study its proficiency for hyperspectral image classification tasks. The rest of this paper is organized as follows: Section 2 introduces the Fast LE technique, Section 3 describes the new proposed algorithm in detail, experimental evidence is reported in Section 4, followed by conclusion notes and future works.

2. Fast Laplacian Eigenmaps (Fast LE)

Fast LE [12] is a recently proposed dimension reduction technique; it can be recognized as the fast variant of the LE [9] algorithm. Fast LE is a qualified feature extraction technique that offers a fast implementation and competent results in many image analysis tasks. In a similar way to the LE method, the initial data in the Fast LE approach is built from a spectral graph, based on Laplacian graph theory. Once the data set's neighbourhood graph is constructed, eigenvectors can be computed, from a transformation matrix, by solving the quadratic optimization problem. The generalized eigenvectors serve as the basis of the low-dimensional space. The transformation matrix encodes distances between data points and their neighbourhoods. Unlike the conventional LE, Fast LE uses a linear constraint in the quadratic optimization problem instead of the quadratic constraint. The proposed linear constraint makes the optimization of the objective function much easier than the quadratic constraint, which helps to improve the computing time. Given a hyperspectral image X = {x1, x2, …, xp} ∈ ℝn organized as a [p × n] matrix, where p is the total pixel number in the hyperspectral scene and n is the spectral band number, its reduced dimensional representation Y = {y1, y2, …, yp} is found by following the next three steps:

1- Build a weighted graph G(N, E), where N is the set of nodes and E represents edges that connect close nodes. Nodes in the graph G represent pixels in the hyperspectral image, and similar pixels are connected by edges.

2- Compute weights: by referring to the graph G, the weights can be defined as follows:

Asma Fejjari et al. / Procedia Computer Science 126 (2018) 656–664 Author name / Procedia Computer Science 00 (2018) 000–000

658

ωij = { exp(−||xi^f − xj^f||² / σf²),  if i and j are connected
      { 0,                             otherwise                    (1)

||xi^f − xj^f||² represents the spectral distance between two nodes (pixels), and σf is a fixed spectral scale parameter.

3- Solve the quadratic optimization problem: the original LE technique tends to optimize the following objective function:

min(YᵀDY = I) Tr(YᵀLY)    (2)

Where YᵀDY = I is a quadratic constraint used to prohibit a zero matrix from being the optimal solution to the following eigenvector problem:

LY = λDY    (3)

Where L = D − ω corresponds to the Laplacian matrix, D is the diagonal weighted degree matrix defined by Di,i = Σj ωi,j, I is the identity matrix and {λi}, i = 1…d, represents the d smallest eigenvalues of (3). The new space is obtained from the eigenvectors corresponding to the d smallest eigenvalues. Contrary to LE, the Fast LE algorithm [12] uses a linear constraint (4) in the objective function:

Tr(YᵀBZ) = ρ    (4)

The linear constraint prevents the trivial solution in which all the rows of Y are precisely the same, leading to ||yi − yj||² being 0 ∀ i, j. Due to the quadratic nature of the loss function, the linear form of the new constraint is easy to enforce, thereby generating a much simpler optimization problem. The loss function is determined by combining the linear constraint and the convex objective function:

T(Y, β) = Tr(YᵀLY) + β [Tr(YᵀBZ) − ρ]    (5)

Minimizing the loss function (5) yields the closed-form solution:

Y* = −(β/2) L⁺ BZ    (6)

Where B is a Laplacian established upon an all-ones adjacency matrix, B = Laplacian(1n×n), Z represents a fixed row-unique matrix, ρ is a fixed nonzero scalar, and β is a scalar multiplier. Fast LE achieves a significant enhancement in speed while keeping a similar solution quality, which makes it very attractive for hyperspectral dimension reduction tasks.
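The Fast LE solution above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function name `fast_le`, the randomly drawn row-unique Z, and the dense pseudoinverse (standing in for a fast sparse solver) are all assumptions made here for clarity.

```python
import numpy as np

def fast_le(W, d=2, rho=1.0):
    """Sketch of the Fast LE closed-form solution (eqs. (4)-(6)):
    minimize Tr(Y^T L Y) subject to the linear constraint Tr(Y^T B Z) = rho,
    which gives Y* = -(beta/2) L^+ B Z."""
    p = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian of the weight matrix
    B = p * np.eye(p) - np.ones((p, p))         # Laplacian of the all-ones adjacency over the p nodes
    Z = np.random.default_rng(0).standard_normal((p, d))  # a fixed row-unique matrix Z
    M = np.linalg.pinv(L) @ B @ Z               # L^+ B Z (pseudoinverse: L is singular)
    beta = -2.0 * rho / np.trace(Z.T @ B @ M)   # choose beta so that Tr(Y^T B Z) = rho
    return -(beta / 2.0) * M
```

Because the only linear-algebra work is one factorization and a multiplication, no eigendecomposition is needed, which is the source of the speedup.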

3. Proposed approach

SSSE [11] was introduced as a robust nonlinear approach for semi-supervised classification, based on the LE technique and a spatial-spectral matrix. The SSSE technique uses spatial nondiagonal potentials to incorporate spatial and spectral data in the classification scheme. The original data is constructed from a graph built on spectral information and a potential matrix that includes spatial relationships between hyperspectral image pixels. Since it joins both the spectral and spatial features, SSSE generates more accurate, compelling and competitive classification results, compared to other well-known dimension reduction techniques. However, like the majority of nonlinear dimensionality reduction techniques [17], SSSE is computationally expensive, which may limit its exploitation in the remote sensing field. In this paper, we investigate a new way to accelerate the SSSE approach by replacing the




classical LE framework with the Fast LE one [12]. This improvement can lessen the computational load without severely affecting the quality of classification, and consequently makes the SSSE approach more practical. Assume the hyperspectral image X = {x1, x2, …, xp} in ℝn is arranged in the form of a matrix of size [p × n], where p is the total number of pixels and n is the number of spectral bands. The new suggested approach, called Fast SSSE, proceeds as follows:

1- Construct the adjacency graph G(N, E), with E = [p × p] edges connecting nearby nodes N = (N1, N2, …, Np) to each other. Proximity relations can be obtained by a KNN (K Nearest Neighbors) search [13]. Due to the specific hyperspectral nature, nodes in the graph correspond to pixels in the hyperspectral scene, and links between a pair of nodes represent their spectral resemblance.

2- Compute the weight matrix, which represents the connectivity of the graph G, using (1):

ωij = { exp(−||xi^f − xj^f||² / σf²),  if i and j are connected
      { 0,                             otherwise                    (1)

3- Compute the cluster potential matrix V; V includes the proximity between the nodes' spatial elements:

V = Σ(i=1..k) Σ(xj ∈ Nε^s(xi)) γij · exp(−||xi^s − xj^s||² / σs²) · V^(i,j)    (7)

Where Nε^s(xi) corresponds to the set of points of X whose spatial elements lie in the ε-neighborhood of xi, xi^s represents the pixel's spatial information, σs is a spatial scale parameter, γij = exp(−||xi^f − xj^f||² / σf²), and V^(i,j) is a nondiagonal matrix, encoding spectral information, defined as follows:

V^(i,j)(k, l) = {  1, if (k, l) ∈ {(i, i), (j, j)}
                { −1, if (k, l) ∈ {(i, j), (j, i)}
                {  0, otherwise                                      (8)

4- Solve the quadratic optimization problem: with the linear constraint (4), the optimization problem becomes much simpler to solve, owing to the quadratic nature of the loss function. The total loss function T(Y, β) is given by:

T(Y, β) = Tr(YᵀSY) + β [Tr(YᵀBZ) − ρ]    (9)

Tr(YᵀSY) is the convex objective function, and S is a Schroedinger matrix determined by incorporating the potential matrix V into the Laplacian one L:

S = L + αV    (10)

The gradient of (9) is:

∇T(Y, β) = 2SY + βBZ    (11)

Setting the gradient equal to zero provides the following solution:

Y* = −(β/2) S⁺ BZ    (12)
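To make steps 3 and 4 concrete, the cluster potential and the Schroedinger matrix can be assembled as below. This is an illustrative sketch under stated assumptions: the function names are ours, the dense double loop would be replaced by a spatial index in practice, and the trace-balancing constant mirrors the α = c·Tr(L)/Tr(V) form used in the experiments.

```python
import numpy as np

def cluster_potential(X_spec, X_spat, eps=1.0, sigma_f=1.0, sigma_s=1.0):
    """Assemble the cluster potential V of eqs. (7)-(8): for each pair (i, j)
    whose spatial positions fall within an eps-neighborhood, add the spectral
    affinity gamma_ij times the spatial Gaussian weight times the elementary
    nondiagonal potential V^(i,j) (+1 at (i,i),(j,j), -1 at (i,j),(j,i))."""
    p = X_spec.shape[0]
    V = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            d_s = np.sum((X_spat[i] - X_spat[j]) ** 2)
            if d_s > eps ** 2:
                continue                                # outside the spatial eps-neighborhood
            gamma = np.exp(-np.sum((X_spec[i] - X_spec[j]) ** 2) / sigma_f ** 2)
            w = gamma * np.exp(-d_s / sigma_s ** 2)
            V[i, i] += w; V[j, j] += w                  # diagonal entries of V^(i,j)
            V[i, j] -= w; V[j, i] -= w                  # off-diagonal entries
    return V

def schroedinger_matrix(L, V, c=17.78):
    """Eq. (10): S = L + alpha*V, with the trace-balanced alpha of [11]."""
    alpha = c * np.trace(L) / np.trace(V)
    return L + alpha * V
```

Note that each elementary potential has zero row sums, so V (like L) annihilates the constant vector; this matters when factorizing S later.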


Upon replacing Y by (12) in Tr(YᵀBZ) = ρ, the multiplier β is updated as follows:

β = −2ρ / Tr(Zᵀ B S⁺ B Z)    (13)

The resolution of the formulated problem (12) can be achieved through a fast sparse Cholesky decomposition [14]-[15] that exploits the sparsity of the Schroedinger matrix involved in the objective function. A Cholesky factor R can be computed using the fast Cholesky decomposition such that:

S = RᵀR    (14)

Solving for H, using the lower-triangular Cholesky factor Rᵀ, can be done as follows:

RᵀH = BZ    (15)

While solving for M, using the upper-triangular Cholesky factor R, is performed as below:

RM = H    (16)
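The factor-and-substitute pipeline of (14)-(16) can be sketched as follows. This is a hedged sketch, not the authors' code: the function name is hypothetical, a dense SciPy Cholesky stands in for the sparse CHOLMOD factorization of [15], and a tiny diagonal ridge is an assumption added here because S = L + αV is only positive semidefinite (the constant vector lies in its null space).

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def fast_ssse_embedding(S, B, Z, rho=1.0, ridge=1e-8):
    """Factor S = R^T R (14), then obtain H and M by two triangular
    solves (15)-(16); beta and Y follow from (18) and (17)."""
    p = S.shape[0]
    R = cholesky(S + ridge * np.eye(p))            # upper-triangular R with S = R^T R
    H = solve_triangular(R.T, B @ Z, lower=True)   # R^T H = B Z   (15)
    M = solve_triangular(R, H, lower=False)        # R M = H       (16)
    beta = -2.0 * rho / np.sum(H ** 2)             # (18): beta = -2*rho / ||H||_F^2
    return -(beta / 2.0) * M                       # (17), so that Tr(Y^T B Z) = rho
```

Two triangular solves cost O(nnz(R)) each, which is where the speedup over a full eigendecomposition comes from.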

By exploiting the sparse structure of S, (16) gives a rapid way to determine the low-dimensional representation Y:

Y = −(β/2) M    (17)

Where

β = −2ρ / ||H||²F    (18)

4. Experiment results

In this section, using real hyperspectral data sets, the suggested method was validated in quantitative and qualitative ways. The tested data sets, experimental implementation and results are described below.

4.1. Data sets and performance evaluation

To assess the suggested technique's yield, we applied the Fast SSSE technique to hyperspectral imagery classification tasks. Afterwards, we compared the obtained classification results with three other dimension reduction methods: LE [9], Fast LE [12] and SSSE [11]. We used the Matlab language and a laptop with a 2-GHz processor and 4 GB of memory to implement the tested algorithms. Two real hyperspectral data sets were adopted to evaluate the proposed approach: the Indian Pines and the Pavia scenes. The two adopted hyperspectral data sets were downloaded from [5]. The first data set was taken with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor system in June of 1992, over north-western USA. It consists of 145 × 145 pixels and 224 spectral bands in the 0.4-2.5 μm wavelength range, with a spectral resolution of 10 nm and a spatial resolution of 20 m. 20 bands corresponding to noise and water absorption ([104-108], [150-163], 220) were removed; hence only 204 bands were adopted. The Indian Pines scene includes 10249 ground truth pixels assigned to 16 classes (Alfalfa (46), Corn-N (1428), Corn-M (830), Corn (237), Grass-P (483), Grass-T (730), Grass-P mowed (28), Hay (478), Oats (20), Soybeans-N (972), Soybeans-M (2455), Soybeans-C (593), Wheat (205), Woods (1265), Buildings (386) and Stone (93)). The second data set was obtained by the ROSIS (Reflective Optics System Imaging Spectrometer) sensor and captures the University of Pavia in the north of Italy. The Pavia data set is composed of 103 bands of 610 × 340 pixels in the spectral range of 0.43 to 0.86 μm, with a spatial resolution of 1.3 m per pixel.
The Pavia image contains 9 classes incorporating 42776 ground truth pixels (Asphalt (6631), Meadows (18649), Gravel (2099), Trees (3064), Sheets (1345), Soil (5029), Bitumen (1330), Bricks (3682) and Shadows (947)). The two used hyperspectral scenes and their ground truth maps are shown in




Fig. 1.

Due to its ability to find a global classification solution and to deal with limited training samples, the SVM (Support Vector Machine) classifier [16] was used during the classification process. We used only 10% and 1% of the ground truth pixels from the Indian Pines and Pavia images, respectively, chosen arbitrarily as the input training samples. Every classification scheme was reiterated ten times, and the average of the classification results was reported to evaluate the classification performance. Classification accuracies, as well as overall accuracy (OA), average accuracy (AA), Kappa coefficient, and computing time were adopted to evaluate the efficiency of the proposed feature extraction method. The visual outputs are also provided by the classification maps. For the proposed Fast SSSE and the original SSSE feature extraction techniques, we adopted the same parameters given in [11], i.e., σf = σs = 1, the number of nearest neighbours k = 20, α = 17.78 Tr(L)/Tr(V) and the reduced dimension n = 50 for the Indian Pines data set, and σf = σs = 1, k = 20, α = 23.71 Tr(L)/Tr(V) and n = 25 for the Pavia scene. For the other tested approaches (LE and Fast LE), we chose k = 12 and n = 20 for the two used images. In our experiments, dimension reduction techniques are first employed to reduce the input feature dimension, then the classifier is applied. The approach followed in this paper is presented in Fig. 2.

OA = (pixels correctly predicted / total number of pixels to be predicted) × 100    (19)

AA = (classification average / number of classes) × 100    (20)

Kappa = (pixels correctly predicted / (pixels correctly predicted + number of confusions)) × 100    (21)

Fig. 1. Pseudo-color image and ground truth map of (a) Indian Pines and (b) Pavia hyperspectral data sets.

Fig. 2. Architecture of the proposed approach.
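The evaluation metrics can be computed as below. This is an illustrative sketch: the function names are ours, OA and AA follow (19)-(20) directly, and for Kappa we use the standard confusion-matrix form of Cohen's kappa rather than the simplified expression (21).

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """OA, eq. (19): percentage of correctly predicted pixels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

def average_accuracy(y_true, y_pred):
    """AA, eq. (20): mean of the per-class accuracies."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return 100.0 * np.mean(accs)

def kappa(y_true, y_pred):
    """Cohen's kappa (%) from the confusion matrix: agreement corrected
    for the agreement expected by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    po = np.trace(cm) / cm.sum()                            # observed agreement
    pe = cm.sum(axis=0) @ cm.sum(axis=1) / cm.sum() ** 2    # chance agreement
    return 100.0 * (po - pe) / (1.0 - pe)
```

AA weights every class equally, which is why it diverges sharply from OA on the imbalanced Indian Pines scene.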


4.2. Classification results

As mentioned previously, the performance of the proposed feature extraction technique was compared with three other dimension reduction methods: LE [9], Fast LE [12] and the original SSSE [11]. Tables 1 and 2 summarize the detailed classification results of all considered methods.

1) Indian Pines data set: as shown in Table 1, although the suggested technique enhanced the overall accuracy (OA) by about 20% and 18% compared to LE and Fast LE respectively, SSSE performs best and paves the way for the best classification accuracy. LE has the worst classification rates, followed by Fast LE, then Fast SSSE. Taking the computing time into consideration, experiments exhibited that SSSE is the most costly in terms of computing time, whereas Fast LE has the least, followed by the LE technique, then the proposed approach. The new linear constraint, integrated into the optimization problem, saves nearly 55% of computing time compared to SSSE, while Fast LE saves about 45% of computing time with respect to the original LE. The best classification accuracies for the proposed approach were produced with the following classes: Alfalfa (99.97%), Grass-P (99.06%), Grass-PM (99.92%), Oats (99.92%), Wheat (99.74%) and Stone (99.79%), while the worst classification results were acquired with the Corn-N (91.86%) and Soybeans-M (91.06%) classes.

2) Pavia data set: for the second data set, the proposed feature extraction technique gains about 9% and 6% of overall accuracy (OA) compared to LE and Fast LE respectively. Experimental results have shown that the proposed technique can decrease the computational burden by about 80% compared to SSSE, while Fast LE saves about 71% of computing time with respect to the conventional LE. The original SSSE still has the highest classification accuracies: 98.93% OA and 94.53% Kappa.
The Gravel (98.46%), Sheets (99.78%), Bitumen (99.22%) and Shadows (99.74%) classes gave the best classification accuracy for the proposed Fast SSSE, whereas the Meadows class (94.27%) provided the worst one. The classification result maps, for the two tested hyperspectral data sets, are shown in Fig. 3 and 4. From these figures, we can observe that the Fast SSSE dimension reduction method outperforms the LE and the Fast LE techniques in terms of visualization effectiveness, while the original SSSE presents the best visualization display.

Table 1. Classification results for the Indian Pines data set.

Classes              No. of samples   LE       Fast LE   SSSE     Fast SSSE
Alfalfa              46               99.98    99.99     99.99    99.97
Corn-N               1428             89.00    89.83     98.76    91.86
Corn-M               830              94.03    94.75     99.19    94.64
Corn                 237              98.44    98.86     100      98.86
Grass-P              483              98.83    99.02     99.60    99.06
Grass-T              730              96.48    97.36     99.59    98.58
Grass-PM             28               99.92    99.80     100      99.92
Hay                  478              99.21    99.55     100      98.83
Oats                 20               99.48    99.72     99.81    99.92
Soybeans-N           972              95.27    96.09     99.01    96.89
Soybeans-M           2455             84.09    87.49     99.63    91.06
Soybeans-C           593              94.82    94.35     98.39    96.08
Wheat                205              99.38    99.38     100      99.74
Woods                1265             96.28    98.46     100      96.97
Buildings            386              96.39    96.87     99.81    97.20
Stone                93               99.89    99.85     99.99    99.79
OA (%)                                68.92    70.52     95.84    88.68
AA (%)                                96.37    96.96     99.29    97.46
Kappa (%)                             65.17    68.66     95.53    84.66
Computing time (s)                    28.35    15.67     74.63    33.22




Table 2. Classification results for the Pavia data set.

Classes              No. of samples   LE       Fast LE   SSSE     Fast SSSE
Asphalt              6631             94.58    95.27     99.90    96.97
Meadows              18649            88.07    91.01     99.56    94.27
Gravel               2099             97.69    98.06     99.63    98.46
Trees                3064             96.16    96.58     99.73    97.90
Sheets               1345             99.65    99.49     100      99.78
Soil                 5029             95.87    96.96     99.59    97.70
Bitumen              1330             99.10    99.35     99.98    99.22
Bricks               3682             96.48    96.84     99.54    97.75
Shadows              947              99.50    99.49     99.94    99.74
OA (%)                                83.56    86.52     98.93    92.89
AA (%)                                96.35    97.01     99.76    97.98
Kappa (%)                             77.84    81.54     94.53    89.91
Computing time (s)                    619.69   179.67    1544.6   308.38

Fig. 3. Indian Pines classification maps, obtained by: (a) the LE technique, (b) the Fast LE technique, (c) the SSSE technique and (d) the proposed approach.

Fig. 4. Pavia classification maps, obtained by: (a) the LE technique, (b) the Fast LE technique, (c) the SSSE technique and (d) the proposed approach.

5. Conclusions and future works

A fast variant of the Spatial Spectral Schroedinger Eigenmaps (SSSE) technique was introduced in this paper. The proposed approach aims to lessen the SSSE computational burden by including a new linear constraint in the quadratic optimization problem, instead of the quadratic one. A fast sparse Cholesky decomposition was used to


resolve the newly formulated optimization problem. The new technique has been suggested for hyperspectral classification tasks. In order to evaluate its performance, the Indian Pines and Pavia hyperspectral scenes were adopted in this study. The experimental results have shown that the proposed approach can save about 55% of the processing time for the first data set and 80% for the second one, compared to the conventional SSSE technique, while preserving good classification accuracy. In future works, we would like to improve the classification performance of the proposed approach by using adaptive strategies to create the adjacency graph, such as the graph growing strategy [10]. We would also try to use a symmetric diagonally dominant (SDD) linear system solver [12], [18] in order to implement the SSSE algorithm even more rapidly.

Acknowledgment

This work was supported and financed by the Ministry of Higher Education and Scientific Research of Tunisia.

References

[1] Su Jinya, Dewei Yi, Cunjia Liu, Lei Guo, Wen-Hua Chen. Dimension reduction aided hyperspectral image classification with a small-sized training dataset: experimental comparisons. Sensors 2017; 17(12): 1-20.
[2] Sellami Akrem, Imed Riadh Farah. High-level hyperspectral image classification based on spectro-spatial dimensionality reduction. Spatial Statistics 2016; 16: 103-117.
[3] Koonsanit Kitti, Chuleerat Jaruskulchai, Apisit Eiumnoh. Band selection for dimension reduction in hyperspectral image using integrated information gain and principal components analysis technique. International Journal of Machine Learning and Computing 2012; 2(3): 248-251.
[4] Wang Shuangting, Chunyang Wang. Research on dimension reduction method for hyperspectral remote sensing image based on global mixture coordination factor analysis. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2015; XL-7/W4: 159-167.
[5] Computational Intelligence search group site. Hyperspectral Remote Sensing Scenes. http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes. [Accessed: 05-September-2017].
[6] Ren Jinchang, Jaime Zabalza, Stephen Marshall, Jiangbin Zheng. Effective feature extraction and data reduction in remote sensing using hyperspectral imaging. IEEE Signal Processing Magazine 2014; 3(4): 149-154.
[7] Sun Weiwei, Avner Halevy, John J. Benedetto, Wojciech Czaja, Chun Liu, Hangbin Wu, Beiqi Shi, Weiyue Li. UL-Isomap based nonlinear dimensionality reduction for hyperspectral imagery classification. ISPRS Journal of Photogrammetry and Remote Sensing 2014; 89: 25-36.
[8] Huang Min, Qibing Zhu, Bojin Wang, Renfu Lu. Analysis of hyperspectral scattering images using locally linear embedding algorithm for apple mealiness classification. Computers and Electronics in Agriculture 2012; 89: 175-181.
[9] Hou Biao, Xiangrong Zhang, Qiang Ye, Yaoguo Zheng. A novel method for hyperspectral image classification based on Laplacian Eigenmap pixels distribution-flow. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2013; 6(3): 1602-1618.
[10] Zhai Yongguang, Lifu Zhang, Nan Wang, Yi Guo, Yi Cen, Taixia Wu, Qingxi Tong. A modified Locality Preserving Projection approach for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters 2016; 13(8): 1059-1063.
[11] Cahill Nathan D., Wojciech Czaja, David W. Messinger. Schroedinger Eigenmaps with nondiagonal potentials for spatial-spectral clustering of hyperspectral imagery. Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX 2014; 9088.
[12] Vepakomma Praneeth, Ahmed Elgammal. A fast algorithm for manifold learning by posing it as a symmetric diagonally dominant linear system. Applied and Computational Harmonic Analysis 2016; 40(3): 622-628.
[13] Abbasifard Mohammad Reza, Bijan Ghahremani, Hassan Naderi. A survey on nearest neighbor search methods. International Journal of Computer Applications 2014; 95(25): 39-52.
[14] Davis Timothy A., William W. Hager. Dynamic supernodes in sparse Cholesky update/downdate and triangular solves. ACM Transactions on Mathematical Software 2009; 35(4).
[15] Chen Yanqing, Timothy A. Davis, William W. Hager, Sivasankaran Rajamanickam. CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Transactions on Mathematical Software 2008; 35(3).
[16] Chang Chih-Chung, Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology 2011; 2(3): 1-27.
[17] Khodr Jihan, Rafic Younes. Dimensionality reduction on hyperspectral images: a comparative review based on artificial data. In: Proceedings of the 4th International Congress on Image and Signal Processing, Shanghai, China; 2011, p. 1875-1883.
[18] Koutis Ioannis, Gary L. Miller, Richard Peng. Approaching optimality for solving SDD systems. In: Proceedings of the Fifty-First Annual IEEE Symposium on Foundations of Computer Science; 2010, p. 337-354.