International conference on Signal Processing, Communication, Power and Embedded System (SCOPES)-2016
Prediction of Neurological Disorders using
Optimized Neural Network
Pravin R. Kshirsagar,
Research Scholar,
Department of Electronics Engineering,
Rajiv Gandhi College of Engineering, Research and
Technology, Chandrapur (M.S.), India,
pravinrk88@yahoo.com
Prof. Dr. Sudhir G. Akojwar,
Senior Member IEEE, Associate Professor, Head,
Department of Electronics & Telecommunication
Engineering, Government College of Engineering,
Chandrapur (M.S.)-442403, India,
sudhirakojwar@gmail.com
Abstract— A distressing number of people worldwide suffer from neurological disorders. Electroencephalogram (EEG) signals are chaotic time-series signals that tend to change rapidly with the patient's condition: from normal to severe conditions, the nature of the signal differs drastically in both amplitude and frequency. Predicting these signals at an early stage is a complex task. This work focuses on predicting the signal for each individual state. The Generalized Regression Neural Network (GRNN), a variant of the Radial Basis Function Neural Network (RBFNN), is well suited to this task but requires a good choice of its spread factor. Choosing an accurate spread factor is not simple and requires experiments to be carried out, which is time-consuming and tedious. The search behaviour of particles in a swarm is therefore adopted to find the spread factor for the GRNN. Combining Particle Swarm Optimization (PSO) with the GRNN greatly improved the prediction accuracy of the GRNN for various neurological disorders.
Keywords— Electroencephalogram (EEG); particle swarm optimization (PSO); search space; radial basis function neural network; generalized regression neural network; spread; prediction.
I. INTRODUCTION
Neurological disorders are among the most common disorders of the nervous system and affect people of all ages, races and ethnic backgrounds. In many cases, there may be no detectable cause for a neurological disorder. A number of neurological disorder syndromes are defined based on a unique combination of symptoms [7]. All seizures are caused by abnormal electrical disturbances in the brain [8]. The generalized tonic-clonic seizure, also called a grand mal seizure, is one type of seizure that involves the entire body; the terms seizure, convulsion, and neurological disorder are most often associated with generalized tonic-clonic seizures. Partial (focal) seizures occur when this electrical activity remains in a limited area of the brain. Such seizures can sometimes turn into generalized seizures, which affect the whole brain; this is called secondary generalization. Partial seizures can be divided into simple seizures, which do not affect awareness or memory, and complex seizures, which affect awareness or memory of events before, during, and immediately after the seizure, and also affect behavior. Continuous spike-wave during slow-wave sleep is a rare neurological disorder syndrome [9]. When it does occur, it usually appears in mid-childhood; its cause is not known, and it usually occurs in children who already have a neurological disorder. Brain death is the complete and irreversible loss of brain function (including the involuntary activity necessary to sustain life). Brain death is used as an indicator of legal death in many jurisdictions, but it is defined inconsistently: various parts of the brain may keep living while others die, and the term "brain death" has been used to refer to various combinations. For example, a major medical dictionary treats "brain death" as synonymous with "cerebral death" (death of the cerebrum) [10][11][12]. This work focuses on predicting future samples of an epileptic patient's brain activity under various states.
A. Generalized Regression Networks
Generalized regression networks, as the name suggests, are based on standard regression (also called kernel regression) and are variants of radial basis function networks [18][19]. They are used for function approximation and the estimation of continuous variables. They do not use iterative training or learning algorithms as feed-forward neural networks do. The network structure is similar to that of a radial basis function (RBF) network, except for the second layer, which is slightly different: it calculates the dot product of the inputs and weights and then normalizes the result. Figure 1 below shows the generalized regression network with its special second layer.
Fig. 1. Generalized Regression Network
The first layer contains as many neurons as there are input vectors P; the layer weights are set to the transpose of the inputs (P'), and the bias b is set to a column vector of 0.8326/SPREAD, so that a neuron's output is 0.5 when the distance between the input and the neuron's weight vector equals the spread. The special second layer (different from RBF) calculates the dot product of the layer weights and the input vector (the output of the network's first stage), normalized by the sum of the elements of the input. This layer also has as many neurons as there are input/target vectors, but its layer weights are set to the targets T [20].
The spread factor plays a crucial role in the GRNN. When the spread is large, a large area around the input vector produces significant outputs from the layer-1 neurons. Conversely, when the spread is small the radial basis function is very steep, so the neuron whose weight vector is closest to the input has a much larger output than the other neurons, and the network follows the target vector associated with the nearest design input vector. A neuron's response depends on the slope of the radial basis function: as the spread increases, the slope becomes smoother, causing most of the neurons to respond to an input vector. The network then effectively takes a weighted average of the target vectors whose design input vectors are nearest to the new input vector. A further increase in the spread makes more neurons respond, making the network function smoother still [13][14][15][16].
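This behaviour can be illustrated with a minimal GRNN in Python/NumPy. This is a sketch of Specht's formulation [14], not the code used in this work; the function name and the toy data are purely illustrative.

```python
import numpy as np

def grnn_predict(P_train, T_train, P_test, spread):
    """Minimal GRNN: a radial-basis-weighted average of the training
    targets.  A small spread makes the network follow the nearest
    training sample; a large spread yields a smooth weighted average."""
    # Squared Euclidean distance between every test and training vector
    d2 = ((P_test[:, None, :] - P_train[None, :, :]) ** 2).sum(axis=2)
    # Radial basis activations; the 0.8326/spread bias makes a neuron's
    # output 0.5 when the distance equals the spread
    w = np.exp(-d2 * (0.8326 / spread) ** 2)
    # Normalized dot product with the targets (the special second layer)
    return (w @ T_train) / w.sum(axis=1)

# Toy data showing the smoothing effect of the spread
P = np.array([[0.0], [1.0], [2.0], [3.0]])
T = np.array([0.0, 1.0, 0.0, 1.0])
x = np.array([[1.0]])
print(grnn_predict(P, T, x, spread=0.1))   # ~1.0: follows the nearest sample
print(grnn_predict(P, T, x, spread=10.0))  # ~0.5: smooth average of targets
```

With spread = 0.1 the prediction locks onto the exactly matching training point, while with spread = 10 it averages over all four targets, exactly the two regimes described above.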
The accuracy of the above networks is controlled by one, two or three parameters:
1. The number of radial basis functions or hidden units,
2. The centers of the hidden units,
3. The spread factor.
During the learning stage, the RBF centers in the hidden layer are randomly selected from the input dataset. A stochastic gradient approach is then used to estimate the weights between the hidden and output layers. The main problem with this method is determining how many centers are required to cover the input vector space adequately. The training algorithm also has a tendency to get stuck at local minima or maxima.
B. Particle Swarm Optimization of the SPREAD Factor for the Generalized Regression Network
The generalized regression network was chosen for finding an optimum value of the spread factor so that accurate results are obtained when tested on real applications and benchmark datasets [25][26]. Finding the SPREAD is a tedious, time-consuming job that may become cumbersome in some cases. Normally a SPREAD constant is taken in the range [0, 1], but experimentally a two-digit precision can only be reached by trial and error. A compromise is therefore required between the time complexity of experimentally finding an optimum SPREAD and the permissible mean squared error. PSO, by contrast, finds a SPREAD value by automatically adjusting the search space and converges early to the required mean squared error. Precision is no longer a problem for the PSO-optimized network: it can exhaust the entire double-precision range to find the SPREAD factor.
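In each PSO iteration, the candidate SPREAD carried by particle i is moved through the search space with the standard velocity and position updates of [20][21]:

v_i(k+1) = w * v_i(k) + c1 * r1 * (pbest_i - x_i(k)) + c2 * r2 * (gbest - x_i(k))
x_i(k+1) = x_i(k) + v_i(k+1)

where x_i is the particle's position (here, a candidate SPREAD value), v_i its velocity, w the inertia weight, c1 and c2 the acceleration constants, r1 and r2 uniform random numbers in [0, 1], pbest_i the best position visited by particle i, and gbest the best position found by the whole swarm.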
II. PROPOSED METHODOLOGY
The steps for implementing the optimization of the GRNN are listed below. The various EEG datasets used in this work are arranged as:
Ptrain – the input vector for training the network
Ttrain – the target vector for the training vector
Ptest – the test vector
Ttest – the target vector for the test vector
These are stored in a .mat file for each individual dataset, which is loaded at the beginning of the program file.
1. The number of particles ranged from 20-30; in most cases 20 was sufficient.
2. For the GRNN, the spread factor is the only crucial parameter. Each particle's position, corresponding to a spread value, is randomly initialized.
3. The GRNN is trained using the particle position value as the spread for 10-20 iterations, irrespective of the goal (minimum MSE).
4. The solution corresponding to the best particle, i.e. the best spread for the network, is saved.
5. The same network is then used with the best spread factor obtained by PSO.
6. The network is trained with the input vector and its target vector.
7. The MSE is calculated for the training samples.
8. The trained network is tested with the test samples; the obtained values are compared with the actual targets and the MSE is evaluated.
9. The error graph and the actual response for both the train and test data are plotted.
10. The MSE is calculated for both the train and the test data.
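Steps 1-4 above can be sketched in Python/NumPy as follows. This is an illustrative reconstruction, not the authors' implementation: the inertia weight (0.7), acceleration constants (1.5), search interval [0.01, 1], and the use of a held-out validation split for the fitness MSE are all assumptions of this sketch.

```python
import numpy as np

def grnn_predict(P, T, X, spread):
    # Minimal GRNN: radial-basis-weighted average of the training targets.
    # Each row of P and X is one 5-sample window (i.e. the transpose of the
    # 5xN arrays described in the text).
    d2 = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 * (0.8326 / spread) ** 2)
    return (w @ T) / w.sum(axis=1)

def pso_spread(Ptrain, Ttrain, Pval, Tval, n_particles=20, iters=15, seed=0):
    """Particle positions encode candidate spread values; a particle's
    fitness is the MSE of a GRNN built with that spread.  Fitness is
    measured on a held-out split here (an assumption of this sketch) so
    the search does not collapse to an arbitrarily small spread."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.01, 1.0, n_particles)   # candidate spreads in (0, 1]
    vel = np.zeros(n_particles)

    def mse(s):
        return float(np.mean((grnn_predict(Ptrain, Ttrain, Pval, s) - Tval) ** 2))

    pbest = pos.copy()
    pbest_f = np.array([mse(s) for s in pos])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard PSO velocity/position update with assumed coefficients
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.01, 1.0)     # keep the spread inside (0, 1]
        f = np.array([mse(s) for s in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()]
    return float(gbest)
```

After `pso_spread` returns, the same GRNN is simply rebuilt with the returned spread (steps 5-8), since a GRNN has no further iterative training.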
III. THE DATASET
Data from a single epileptic patient were acquired under various health conditions at the Central India Institute of Medical Sciences, Nagpur, Maharashtra, India. The conditions taken into account were:
1. Normal, 2. Generalized neurological disorder, 3. Focal neurological disorder, 4. Slow wave, 5. Brain death.
The sampling frequency was 256 Hz and data were acquired over 16 EEG channels. On average, 2560 samples were recorded over a single channel, i.e. 10 seconds, except for brain death, which was recorded over 3580 seconds and consists of 4,58,240 samples. The current predicted value was calculated from the 5 past samples as,
W(t) = [w(t-1), w(t-2), w(t-3), w(t-4), w(t-5)]          (1)
The above vectors were computed for the dataset from sample 6 to sample 2000, for the training and testing samples, giving a total of 1995 samples; 1500 samples were used for training and the remaining 495 for testing. The data was then normalized by dividing it by the maximum value. The data arrays so obtained for training and testing were 5x1500 and 5x495 respectively. The following waveforms show the actual output together with the output obtained by PSO-GRNN, and the error plot for PSO-GRNN, for the different conditions of the patient, i.e., the normal state, generalized neurological disorder, focal neurological disorder, slow-wave neurological disorder and brain death.
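The windowing and normalization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: normalization by the maximum absolute value is an assumption, and the sine signal stands in for the clinical EEG data, which is not reproduced here.

```python
import numpy as np

def make_windows(signal, past=5, start=6, end=2000):
    """Builds the 5-past-sample input vectors of Eq. (1) for samples
    start..end (1-indexed as in the text), normalizes by the maximum
    absolute value, and splits into the 5x1500 training and 5x495 test
    arrays described above."""
    x = np.asarray(signal, dtype=float)
    x = x / np.abs(x).max()              # normalize by the maximum value
    rows, targets = [], []
    for t in range(start - 1, end):      # 0-indexed samples 6..2000
        rows.append(x[t - past:t][::-1]) # [w(t-1), ..., w(t-5)] as in Eq. (1)
        targets.append(x[t])             # the value to be predicted
    P = np.array(rows).T                 # shape (5, 1995)
    T = np.array(targets)
    return (P[:, :1500], T[:1500]), (P[:, 1500:], T[1500:])

demo = np.sin(0.05 * np.arange(2560))    # stand-in for one EEG channel
(train_P, train_T), (test_P, test_T) = make_windows(demo)
print(train_P.shape, test_P.shape)       # (5, 1500) (5, 495)
```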
IV. RESULTS
The results show the performance of the PSO-based generalized regression neural network over the different states of neurological disorder. Table I shows the calculated MSE and accuracy for each state. The MSE achieved by the proposed algorithm is 0.003786 and the achieved accuracy is 99.62%.
[Figure omitted: "Comparison by optimising Spread"; x-axis: Samples with time (0-500), y-axis: Values; legend: Expected Output, By PSO spread]
Fig. 2. Comparison of actual output with output obtained by PSO-GRNN for normal patient
Fig. 3. Error plot for PSO-GRNN in prediction for normal patient
[Figure omitted: "Comparison by optimising Spread"; x-axis: Samples with time (0-500), y-axis: Values; legend: Expected Output, By PSO spread]
Fig. 4. Comparison of actual output with output obtained by PSO-GRNN for generalized neurological disorder patient
Fig. 5. Error plot for PSO-GRNN for generalized neurological disorder patient in prediction
[Figure omitted: "Comparison by optimising Spread"; x-axis: Samples with time (0-500), y-axis: Values; legend: Expected Output, By PSO spread]
Fig. 6. Comparison of actual output with output obtained by PSO-GRNN for focal neurological disorder patient
Fig. 7. Error plot for PSO-GRNN for focal neurological disorder patient in prediction
[Figure omitted: "Comparison by optimising Spread"; x-axis: Samples with time (0-500), y-axis: Values; legend: Expected Output, By PSO spread]
Fig. 8. Comparison of actual output with output obtained by PSO-GRNN for slow wave patient
Fig. 9. Error plot for PSO-GRNN for slow wave patient in prediction
[Figure omitted: "Comparison by optimising Spread"; x-axis: Samples with time (0-500), y-axis: Values; legend: Expected Output, By PSO spread]
Fig. 10. Comparison of actual output with output obtained by PSO-GRNN for Brain Death patient in prediction
Fig. 11. Error plot for PSO-GRNN for Brain Death patient in prediction
TABLE I. Mean squared error, PSO obtained spread factor and accuracy for
various neurological disorders.
V. CONCLUSION
The proposed RBF+PSO based prediction approach shows good potential by optimizing the relevant parameters, i.e., the MSE and the spread factor. Even in the worst case PSO is useful: it may not be accurate, but it can provide a guess for the network size and the number of epochs in the next run. By properly setting the minimum and maximum limits on the particle positions and velocities, the network can converge early and efficiently. The PSO-based neural network works better when the train and test data are normalized beforehand with a common factor (the maximum value in the set). Experiments with radial basis networks showed that finding such a precise value of the spread factor by hand is not easy; PSO can find a better one within a few iterations.
Acknowledgment
The authors would like to acknowledge Dr. Neeraj Baheti,
Consultant in Neurology and Neurological disorders
Specialist, Department of Neurological Sciences, Central India
Institute of Medical Science, Nagpur (India) for providing the
necessary information to carry out this work.
References
[1] Srinivasan V., Eswaran C. and Sriraam N., "Approximate Entropy-Based Epileptic EEG Detection Using Artificial Neural Networks", IEEE Transactions on Information Technology in Biomedicine, volume 11, issue 3, pp. 288-295, 2007.
[2] Ebersole J. S., Pedley T. A., "Current Practice of Clinical Electroencephalography", third edition, Lippincott Williams and Wilkins, 2003.
[3] Turkey N. Alotaiby, Saleh A. Alshebeili, Tariq Alshawi, Ishtiaq Ahmad and Fathi E, "EEG seizure detection and prediction algorithms: a survey", EURASIP Journal on Advances in Signal Processing, Springer, pp. 1-21, 2014.
[4] Samiee K., Kovacs P. and Gabbouj M., "Epileptic Seizure Classification of EEG Time-Series Using Rational Discrete Short-Time Fourier Transform", IEEE Transactions on Biomedical Engineering, volume 62, issue 2, pp. 541-552, 2014.
[5] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David and C. E. Elger, "Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state", Phys. Rev. E, volume 64, pp. 1-8, 2001.
[6] Lesca G, Rudolf G, Labalme A, Hirsch E, Arzimanoglou A, Genton P, Motte J, de Saint Martin A, Valenti MP, Boulay C, De Bellescize J, Kéo-Kosal P, Boutry-Kryza N, Edery P, Sanlaville D, Szepetowski P., "Epileptic encephalopathies of the Landau-Kleffner and continuous spike and waves during slow-wave sleep types: genomic dissection makes the link with autism", volume 9, pp. 1526-1538, 2012.
[7] Nabeel Ahammad, Thasneem Fathima and Paul Joseph, "Detection of Epileptic Seizure Event and Onset Using EEG", BioMed Research International, Hindawi, volume 2014, pp. 1-7, 2014.
[8] H. Qu and J. Gotman, "A patient-specific algorithm for the detection of seizure onset in long-term EEG monitoring: possible use as a warning device", IEEE Transactions on Biomedical Engineering, vol. 44, no. 2, pp. 115-122, 1997.
[9] Saadat Nasehi and Hossein Pourghassem, "A Novel Epileptic Seizure Detection Algorithm Based on Analysis of EEG and ECG Signals Using Probabilistic Neural Network", Australian Journal of Basic and Applied Sciences, volume 5, no. 12, pp. 308-315, 2011.
[10] A. T. Tzallas, M. G. Tsipouras, and D. I. Fotiadis, "Automatic Seizure Detection Based on Time-Frequency Analysis and Artificial Neural Networks", Computational Intelligence and Neuroscience, volume 2007, 2007.
[11] McGrogan N., "Neural network detection of epileptic seizures in the electroencephalogram", Ph.D. thesis, Oxford University, Oxford, UK, Feb. 1999.
[12] Callistus O. Mgbe, Joseph M. Mom and Gabriel A. Igwue, "Performance Evaluation of Generalized Regression Neural Network Path Loss Prediction Model in Macrocellular Environment", Journal of Multidisciplinary Engineering Science and Technology, volume 2, issue 2, pp. 204-208, 2015.
[13] Dirk Tomandl and Andreas Schober, "A modified GRNN with new efficient training algorithms as a black-box tool for data analysis", Neural Networks, Elsevier, volume 14, pp. 1023-1034, 2001.
[14] D. F. Specht, "A general regression neural network", IEEE Transactions on Neural Networks, volume 2, issue 6, pp. 568-576, 1991.
[15] Shaikh Abdul Hannan, R. R. Manza, R. J. Ramteke, "Generalized Regression Neural Network and Radial Basis Function for Heart Disease Diagnosis", International Journal of Computer Applications, volume 7, issue 13, pp. 7-13, 2010.
[16] Azuaje F., Dubitzky W., Lopes P., Black N., and Adamson K., "Predicting coronary disease risk based on short-term RR interval measurements: A neural network approach", Artificial Intelligence in Medicine, volume 15, pp. 275-297, 1999.
[17] David P. Aguilar, "A radial basis neural network for the analysis of transportation data", thesis submitted at University of South Florida Scholar Commons, 2004.
[18] Martin T. Hagan, Howard B. Demuth, Mark Beale, "Neural Network Design", Cengage Learning, 2008.
[19] S. N. Sivanandam, S. Sumathi, S. N. Deepa, "Introduction to Neural Networks using Matlab 6.0", Tata McGraw Hill Education Private Ltd., 2010.
[20] Kennedy J., Eberhart R., "Particle Swarm Optimization", International Conference on Neural Networks, Perth, Australia, IEEE, volume 4, pp. 1942-1948, 1995.
[21] Russell Eberhart and James Kennedy, "A New Optimizer Using Particle Swarm Theory", Sixth International Symposium on Micro Machine and Human Science, IEEE, pp. 39-43, 1995.
Sr. No. | Subject State                 | MSE       | Spread factor by PSO | Accuracy (%)
1       | Normal                        | 0.0016367 | 0.077536             | 99.8363
2       | General Neurological disorder | 0.0051768 | 0.096843             | 99.4823
3       | Focal Neurological disorder   | 0.006942  | 0.068344             | 99.3058
4       | Slow Wave                     | 0.0010115 | 0.012214             | 99.8989
5       | Brain Death                   | 0.0041671 | 0.10883              | 99.5833
[21] Shi Y., Eberhart R., "Parameter Selection in Particle Swarm Optimization", Proceedings of the Seventh Annual Conference on Evolutionary Programming, pp. 591-601, 1998.
[22] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization", Proceedings of the 2000 Congress on Evolutionary Computation, volume 1, pp. 84-88, 2000.
[23] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space", IEEE Transactions on Evolutionary Computation, volume 6, pp. 58-73, 2002.
[24] S. Kiranyaz, J. Pulkkinen, A. Yildirim and M. Gabbouj, "Multi-dimensional particle swarm optimization in dynamic environments", Expert Systems with Applications, volume 38, no. 3, pp. 2212-2223, 2011.
[25] Pravin Kshirsagar and Sudhir Akojwar, "Hybrid Heuristic Optimization for Benchmark Datasets", International Journal of Computer Applications (0975-8887), vol. 146, no. 7, July 2016.
[26] Pravin Kshirsagar and Sudhir Akojwar, "Novel Approach for Classification and Prediction of Non Linear Chaotic Databases", International Conference on Electrical, Electronics, and Optimization Techniques, March 2016.