EEG feature extraction for classification of sleep stages

E. Estrada, H. Nazeran, P. Nava, K. Behbehani*, J. Burk**, and E. Lucas**

Department of Electrical and Computer Engineering, The University of Texas at El Paso, El Paso, Texas, USA
* Biomedical Engineering Program, The University of Texas at Arlington, Arlington, Texas, USA
** Sleep Consultants Inc., Ft Worth, Texas, USA
Email: nazeran@ece.utep.edu
Abstract—Automated sleep staging based on EEG signal analysis provides an important quantitative tool to assist neurologists and sleep specialists in the diagnosis and monitoring of sleep disorders as well as in the evaluation of treatment efficacy. A complete visual inspection of the EEG recordings acquired during nocturnal polysomnography is time consuming, expensive, and often subjective. Therefore, feature extraction is implemented as an essential preprocessing step to achieve significant data reduction and to determine informative measures for automatic sleep staging. However, the analysis of the EEG signal and the extraction of sensitive measures from it have been challenging tasks due to the complexity and variability of this signal. In this paper we present three different schemes to extract features from the EEG signal: relative spectral band energy, harmonic parameters, and Itakura distance. Spectral estimation is performed using autoregressive (AR) modeling. We then compare the performance of these schemes with a view to selecting an optimal set of features for specific, sensitive, and accurate neuro-fuzzy classification of sleep stages.
Keywords—Sleep staging; EEG signal processing; feature extraction; biomedical signal processing; harmonic parameters; Itakura distance; sleep apnea.
I. INTRODUCTION
Electroencephalogram (EEG) is perhaps the most
important tool in studying sleep and sleep-related disorders
such as sleep apnea or insomnia. Sleep comprises two general states: rapid eye movement (REM) and non-rapid eye movement (NREM) sleep. NREM is in turn subdivided into four stages: 1, 2, 3, and 4, according to the Rechtschaffen and Kales (RK) sleep scoring standard [1]. The awake state is not formally a sleep state, but it is considered one for the purpose of sleep scoring.
Brain activity is divided into four main rhythms. Beta waves are defined as low-voltage (around 5 µV), high-frequency waves (14 to 30 Hz, sometimes as high as 50 Hz). Alpha waves, which occur during relaxed states, are regular rhythms of 8 to 13 Hz with higher amplitudes than beta waves. Theta waves are typically of even greater amplitude and slower frequency than alpha waves; their frequency range is normally between 4 and 7 Hz. Delta waves, the slowest EEG rhythms, generally have the highest amplitudes observed in the EEG (about 300 µV), with all frequencies below 3.5 Hz.
Beta waves represent arousal, while alpha waves represent non-arousal. The awake state is more closely related to beta waves than to alpha waves. Stage 1, which is considered to be a midway state between waking and sleep, is further
linked to the alpha and theta waves. Stage 2 is manifested by the low-voltage waves of stage 1 mixed with what are known as K-complexes (sharp, high-voltage transient waves that occur spontaneously) and sleep spindles (bursts of waves with a frequency of 12 to 15 Hz). Stage 3 sleep begins when the low-voltage background waves that distinguish the spindles are replaced by high-amplitude, low-frequency delta waves. In deep stage 4 sleep, spindles drop out and the EEG signal consists almost entirely of delta waves. Table 1 shows the relationship between sleep stages and brain activity [2, 3].
Table 1. Relationship between sleep stages and EEG rhythms.
Stage     EEG waveforms contained
Awake     Beta, Alpha
Stage 1   Alpha, Theta
Stage 2   Alpha, Theta, K-complex, spindle waves
Stage 3   Delta, spindle waves
Stage 4   Delta
During REM sleep, brain activity reverts from the stage 4 pattern to one similar to stage 1. In general, REM sleep is associated with visual dreaming.
II. MATERIALS AND METHODS
A. Data Collection
Data acquired from one volunteer subject with no sleep disorders, who was referred to our accredited sleep laboratory (Sleep Consultants Inc., Fort Worth, Texas), were used for this study. A total of 7 hours and 4 minutes of sleep data were recorded for this subject. The 10-20 standard electrode placement system was used for EEG recording; specifically, recordings between the C1 and A2 positions were used for this study. The EEG signals were amplified using a Nihon Kohden polygraph (Irvine, CA) and data acquisition was achieved using a Telefactor system (Conshohocken, PA) for polysomnography. One EEG channel was stored using a
sampling frequency of 1000 Hz. An experienced sleep specialist, blind to the objective of this study, scored the EEG in epochs of 30 seconds in duration based on the RK standard method. Table 2 shows the number and percentage of scored epochs for this subject. This information was saved by the sleep laboratory into a binary file.
Table 2. Polysomnogram information.
Stage     30-second scored epochs    Percent (%)
Awake     185                        21
Stage 1   52                         6
Stage 2   314                        36
Stage 3   84                         10
Stage 4   128                        15
REM       86                         10
For simplicity and further processing, this file was restructured into two principal vectors: one containing the EEG information (with Fs = 1000 Hz), and the other containing the sleep stage information (one entry per 30-second epoch of EEG).
B. EEG Preprocessing
The complete EEG vector was processed using a sixth-order Butterworth bandpass filter with cutoff (corner) frequencies of 0.5 and 50 Hz. A zero-phase digital filter was realized by filtering the EEG signals in both the forward and reverse directions, resulting in an effective 12th-order filter. The EEG signals were then decimated by a factor of 10, yielding signals with a new sampling frequency of 100 Hz.
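For illustration, the following is a minimal sketch of this preprocessing chain, assuming SciPy is available and that the raw EEG is held in a one-dimensional NumPy array sampled at 1000 Hz; the function and variable names are illustrative only and are not taken from the original implementation.

import numpy as np
from scipy import signal

def preprocess_eeg(raw_eeg, fs=1000.0):
    """Zero-phase 6th-order Butterworth bandpass (0.5-50 Hz) followed by
    decimation by 10, mirroring the preprocessing described in the text."""
    # Design the 6th-order bandpass filter as second-order sections for stability.
    sos = signal.butter(6, [0.5, 50.0], btype="bandpass", fs=fs, output="sos")
    # Forward-backward filtering gives zero phase and doubles the effective order to 12.
    filtered = signal.sosfiltfilt(sos, raw_eeg)
    # Decimate by 10 (scipy applies its own anti-aliasing filter); new fs = 100 Hz.
    decimated = signal.decimate(filtered, 10)
    return decimated, fs / 10

# Example with synthetic data: 30 s of noise sampled at 1000 Hz.
eeg, fs_new = preprocess_eeg(np.random.randn(30 * 1000))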
C. Feature Extraction
For implementation of the feature extraction schemes, we used the following procedure to obtain the statistics for each measure. First, we partitioned the EEG signal into epochs, taking into account that the EEG had been scored in 30-second epochs. To track the sleep stage information provided by the sleep specialist, the new epoch lengths were chosen as sub-partitions of 30 seconds. These sub-partitions provided the flexibility to minimize the probability of having 30-second segments with more than one sleep stage present. Second, after running the feature extraction algorithms, the extracted measures were collected into different groups using the annotations in the sleep stage vector. This allowed us to compute the mean, standard deviation, maximum, and minimum values of each measure for each sleep stage.
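As an illustration of this bookkeeping, the sketch below (a simplified, assumed version of the procedure, with hypothetical variable names) splits the decimated 100 Hz EEG into 10-second sub-epochs, assigns each sub-epoch the stage label of its parent 30-second epoch, and accumulates per-stage statistics for one example measure.

import numpy as np

def partition_and_group(eeg, stage_per_30s, fs=100, sub_epoch_s=10):
    """Split EEG into sub-epochs and group a per-sub-epoch feature by sleep stage."""
    samples = int(fs * sub_epoch_s)
    n_sub = len(eeg) // samples
    sub_epochs = eeg[: n_sub * samples].reshape(n_sub, samples)
    # Each 30-s scored epoch covers (30 // sub_epoch_s) consecutive sub-epochs.
    labels = np.repeat(stage_per_30s, 30 // sub_epoch_s)[:n_sub]

    # Example feature: variance of each sub-epoch (stand-in for any extracted measure).
    feature = sub_epochs.var(axis=1)

    stats = {}
    for stage in np.unique(labels):
        vals = feature[labels == stage]
        stats[stage] = dict(mean=vals.mean(), std=vals.std(),
                            max=vals.max(), min=vals.min())
    return stats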
The relative spectral energy bands and the harmonic (frequency-domain Hjorth) parameters [5] were used for feature extraction. These measures require the computation of the power spectral density of the EEG signals. This task can be achieved using parametric or non-parametric spectral estimation methods. Autoregressive (AR) modeling was selected because it provides smoother, more accurate, higher-resolution spectra of the EEG signals, and because the calculation of the Itakura distance is itself based on AR parameters. The disadvantage of this method is the need to select an appropriate model order p. This selection can be based on a priori knowledge or on different criteria (e.g., the Akaike Information Criterion or the Minimum Description Length criterion) [4].
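As a hedged illustration of order selection, the following sketch uses the Levinson-Durbin recursion to obtain the prediction-error variance at every candidate order and picks the order that minimizes the Akaike Information Criterion; the criterion form and the maximum candidate order are assumptions, since the order actually used in this study is not specified in the text.

import numpy as np

def biased_autocorr(x, max_lag):
    """Biased autocorrelation estimate r(0..max_lag)."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])

def aic_order_selection(x, max_order=20):
    """Select an AR order by minimizing AIC(p) = N*ln(E_p) + 2p, where E_p is
    the Levinson-Durbin prediction-error variance at order p (an assumed criterion)."""
    n = len(x)
    r = biased_autocorr(x, max_order)
    a = np.zeros(max_order + 1)          # a[1..p] hold the AR coefficients
    err = r[0]                           # prediction-error variance at order 0
    best_order, best_aic = 0, n * np.log(err)
    for p in range(1, max_order + 1):
        k = -(r[p] + np.dot(a[1:p], r[p - 1:0:-1])) / err   # reflection coefficient
        a_new = a.copy()
        a_new[p] = k
        a_new[1:p] = a[1:p] + k * a[p - 1:0:-1]
        a = a_new
        err = err * (1.0 - k ** 2)
        aic = n * np.log(err) + 2 * p
        if aic < best_aic:
            best_order, best_aic = p, aic
    return best_order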
C1. Autoregressive Spectral Estimation of EEG Signals
The autoregressive AR(p) process is a special case of an autoregressive moving average ARMA(p, q) process with q = 0 [4]. The AR(p) process is generated when unit-variance white noise w(n) is passed through an all-pole filter of the form:

H_p(z) = \frac{b(0)}{1 + \sum_{k=1}^{p} a_p(k) z^{-k}}    (1)

The autocorrelation sequence of this process satisfies the Yule-Walker equations:

r_x(k) + \sum_{l=1}^{p} a_p(l) r_x(k - l) = |b(0)|^2 \delta(k), \quad k \ge 0    (2)

Hence, by solving these equations we can obtain the a_p(l) coefficients. b(0) can then be found as follows:

|b(0)|^2 = r_x(0) + \sum_{k=1}^{p} a_p(k) r_x(k)    (3)

Using the estimates of the model coefficients, it is possible to estimate the power spectrum:

\hat{P}_{epoch}(e^{j\omega}) = \left| H_p(e^{j\omega}) \right|^2 = \frac{|b(0)|^2}{\left| 1 + \sum_{k=1}^{p} a_p(k) e^{-jk\omega} \right|^2}    (4)

where \hat{P}(f) is defined as follows:

\hat{P}(f) = \hat{P}_{epoch}(e^{j 2 \pi f})    (5)
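A compact way to realize equations (1)-(5) is to solve the Yule-Walker system directly from the estimated autocorrelation sequence and then evaluate the all-pole spectrum on a frequency grid. The sketch below is one possible implementation, assuming NumPy/SciPy and a model order p chosen beforehand; it is an illustration rather than the original code.

import numpy as np
from scipy.linalg import toeplitz

def ar_psd(epoch, p, fs=100.0, freqs=None):
    """AR(p) power spectrum of one EEG epoch via the Yule-Walker equations (2)-(4)."""
    x = epoch - epoch.mean()
    n = len(x)
    # Biased autocorrelation estimates r(0..p).
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    # Solve r(k) + sum_l a(l) r(k-l) = 0 for k = 1..p  (eq. 2 with k > 0).
    a = np.linalg.solve(toeplitz(r[:p]), -r[1: p + 1])
    # |b(0)|^2 from eq. (3).
    b0_sq = r[0] + np.dot(a, r[1: p + 1])
    # Evaluate eq. (4) on a frequency grid (here 0.5 Hz steps up to 45 Hz).
    if freqs is None:
        freqs = np.arange(0.5, 45.0 + 0.5, 0.5)
    omega = 2 * np.pi * freqs / fs
    denom = np.abs(1 + np.exp(-1j * np.outer(omega, np.arange(1, p + 1))) @ a) ** 2
    return freqs, b0_sq / denom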
C2. Relative Percent Spectral Energy Band
For this analysis we computed the total power content (TPC) of P(f) from 0.5 to 45 Hz. P(f) was divided into seven different energy bands, and the respective power energy bands (PEB) were calculated. The relative percent spectral energy band (RPEB) was then expressed as:

RPEB = \frac{PEB}{TPC} \times 100    (6)
Table 3 shows the partition of the frequency range for each bin or energy band [6].

Table 3. Spectral energy bands of EEG waves.
Band      Bandwidth (Hz)
Delta 1   0.5-2.5
Delta 2   2.5-4
Theta 1   4-6
Theta 2   6-8
Alpha     8-12
Beta 1    12-20
Beta 2    20-45
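Given an estimated spectrum, the relative percent spectral energy bands of equation (6) can be obtained by summing the spectral values within each band of Table 3 and normalizing by the total power content from 0.5 to 45 Hz. The band limits below are copied from Table 3; the function itself is an assumed implementation.

import numpy as np

# Frequency bands (Hz) from Table 3.
BANDS = {"Delta 1": (0.5, 2.5), "Delta 2": (2.5, 4), "Theta 1": (4, 6),
         "Theta 2": (6, 8), "Alpha": (8, 12), "Beta 1": (12, 20), "Beta 2": (20, 45)}

def relative_band_energy(freqs, psd):
    """RPEB = 100 * PEB / TPC, with TPC computed over 0.5-45 Hz (eq. 6)."""
    in_range = (freqs >= 0.5) & (freqs <= 45.0)
    tpc = psd[in_range].sum()
    rpeb = {}
    for name, (lo, hi) in BANDS.items():
        # Half-open bins avoid counting shared band edges twice.
        mask = (freqs >= lo) & (freqs < hi)
        rpeb[name] = 100.0 * psd[mask].sum() / tpc
    return rpeb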
C3. Harmonic Parameters
The harmonic parameters [5, 6], which are the frequency-domain versions of the Hjorth parameters, are: the center frequency, the bandwidth, and the spectral value at the center frequency. These parameters are defined as follows:

f_c = \int_{f_L}^{f_H} f P(f) \, df \Big/ \int_{f_L}^{f_H} P(f) \, df    (7)

\sigma_f^2 = \int_{f_L}^{f_H} (f - f_c)^2 P(f) \, df \Big/ \int_{f_L}^{f_H} P(f) \, df    (8)

P_{f_c} = P(f_c)    (9)

where f_L and f_H are set to 0.5 and 45 Hz, respectively. Since the computation of P(f) involves discrete values, the above formulas were approximated using summations as follows:

f_c = \sum_{f = f_L}^{f_H} f \hat{P}(f) \Big/ \sum_{f = f_L}^{f_H} \hat{P}(f)    (10)

\sigma_f^2 = \sum_{f = f_L}^{f_H} (f - f_c)^2 \hat{P}(f) \Big/ \sum_{f = f_L}^{f_H} \hat{P}(f)    (11)

P_{f_c} = \hat{P}(f_c')    (12)

In the above formulas, the index f spans from 0.5 to 45 Hz in increments of 0.5 Hz, and f_c' is the f value closest to f_c.
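The discrete forms (10)-(12) translate directly into a few lines of array arithmetic. The sketch below assumes the same 0.5 Hz frequency grid used above; it is an illustrative implementation, not the original one.

import numpy as np

def harmonic_parameters(freqs, psd):
    """Center frequency, bandwidth (as a variance), and spectral value at the
    center frequency, per the discrete approximations (10)-(12)."""
    total = psd.sum()
    fc = (freqs * psd).sum() / total                         # eq. (10)
    sigma_f_sq = (((freqs - fc) ** 2) * psd).sum() / total   # eq. (11)
    p_fc = psd[np.argmin(np.abs(freqs - fc))]                # eq. (12): closest grid point to fc
    return fc, sigma_f_sq, p_fc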
C4. Itakura Distance
The Itakura distance is widely used in speech processing applications to measure the distance between two AR processes [7, 8]. Here, the Itakura distance was used to measure the similarity of a baseline EEG epoch (Awake, Stage 1, Stage 2, Stage 3, Stage 4, or REM) to the rest of the epochs in the EEG vector. If we let the baseline epoch x[n] be an AR process with coefficient vector a_x = [1, -a_1, -a_2, ..., -a_p], and the segment y[n] to be compared with it be an AR process with coefficient vector a_y = [1, -a_1, -a_2, ..., -a_p] (its own AR coefficients), then the minimum square error (MSE) for the baseline process is:

MSE_{x,x} = a_x^T R_x(p) \, a_x    (14)

where R_x(p) is the autocorrelation matrix of the baseline epoch, of size (p + 1) x (p + 1). Similarly, the MSE of the other process passed through the baseline model is:

MSE_{x,y} = a_y^T R_x(p) \, a_y    (15)

The Itakura distance of the baseline to the other epochs is defined as:

d_{I_{x,y}} = \log\left( MSE_{x,y} / MSE_{x,x} \right)    (16)

Furthermore, an analysis of how well y[n] is modeled by the AR parameters of x[n] can be carried out, giving the complementary Itakura distance:

d_{I_{y,x}} = \log\left( \frac{MSE_{y,x}}{MSE_{y,y}} \right) = \log\left( \frac{a_x^T R_y(p) \, a_x}{a_y^T R_y(p) \, a_y} \right)    (17)

Combining (16) and (17) we obtain the symmetric Itakura distance:

d'_{I_{x,y}} = \frac{1}{2}\left( d_{I_{x,y}} + d_{I_{y,x}} \right)    (18)
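A possible realization of equations (14)-(18), reusing the same Yule-Walker fit as above, is sketched below. The helper ar_fit, the default model order, and the sign convention for the coefficient vector (here [1, a_1, ..., a_p] for the model with transfer function 1/(1 + sum_k a_k z^{-k})) are assumptions made to keep the example self-contained.

import numpy as np
from scipy.linalg import toeplitz

def ar_fit(x, p):
    """Return the full AR coefficient vector [1, a_1, ..., a_p] and the
    (p+1)x(p+1) autocorrelation matrix R(p) of an epoch (Yule-Walker)."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    a = np.linalg.solve(toeplitz(r[:p]), -r[1:])
    return np.concatenate(([1.0], a)), toeplitz(r)

def symmetric_itakura(x, y, p=10):
    """Symmetric Itakura distance between two epochs, eqs. (14)-(18)."""
    a_x, R_x = ar_fit(x, p)
    a_y, R_y = ar_fit(y, p)
    mse_xx = a_x @ R_x @ a_x          # eq. (14)
    mse_xy = a_y @ R_x @ a_y          # eq. (15)
    mse_yy = a_y @ R_y @ a_y
    mse_yx = a_x @ R_y @ a_x
    d_xy = np.log(mse_xy / mse_xx)    # eq. (16)
    d_yx = np.log(mse_yx / mse_yy)    # eq. (17)
    return 0.5 * (d_xy + d_yx)        # eq. (18)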
III. RESULTS
Using the above-mentioned features, a matrix of 11 features by 2,547 (10-second) segments was extracted. A total of 25,470 measures are now available to be fed into a neuro-fuzzy system. Figure 1 shows the mean Itakura distance between the sleep stages and the baseline epoch (taken as the Awake state).
Figure 1. Mean Itakura Distance
Clearly, the Itakura distance shows that as the subject falls into deeper sleep stages, the distance from the Awake baseline increases as a consequence of changes in the AR process. Additionally, the central frequency reflects that the Awake state is associated with the higher frequencies of the spectrum, while Stage 4 is linked to slow waves (Figure 2).
Figure 2. Central Frequency
Analysis of the data showed that the % relative energy band of Delta 1 contained the largest share of spectral power. Figure 3 shows the % relative energy band of delta 1 waves for different sleep stages.
Figure 3. Relative spectral energy band of delta 1.
The remaining % relative energy bands for the other EEG waveforms are shown in Figure 4. It can be observed that the % relative energy band of beta 2 waves is a good feature for distinguishing between different sleep stages.
IV. DISCUSSION
The results demonstrate that the extracted features
provide promising possibilities to distinguish between
different sleep stages. It is also evident that REM is quite difficult to separate from the other sleep stages because its spectrum overlaps with theirs. The EOG signal may be a more discerning signal for the detection of REM activity. Therefore, detection of the REM stage from EEG signal analysis remains a challenging research topic that warrants further investigation.
Figure 4. Percent relative spectral energy bands for different waves in the EEG signals.
V. CONCLUSION
The Itakura distance and central frequency seem to
provide promising features for classification of sleep stages.
However, the high variance of these measures causes the
mean values of the features to overlap and makes sleep
staging by conventional statistics difficult. Therefore, neuro-
fuzzy classifiers are being developed to facilitate this
process.
REFERENCES
[1] A. Rechtschaffen and A. Kales, eds., A Manual of Standardized Terminology, Techniques and Scoring System for Sleep Stages of Human Subjects. Los Angeles: Brain Information Service/Brain Research Institute, 1968.
[2] W. B. Mendelson, Human Sleep: Research and Clinical Care. New York and London: Plenum Medical Book Company, 1987, pp. 6-12.
[3] J. G. Webster, Medical Instrumentation: Application and Design, Third Edition. Wiley, 1998, pp. 165-171.
[4] M. H. Hayes, Statistical Digital Signal Processing and Modeling. Wiley, 1996, pp. 194, 198-199, 440, 447.
[5] P. Van Hese, W. Philips, J. De Koninck, R. Van de Walle, and I. Lemahieu, "Automatic detection of sleep stages using the EEG," Proceedings of the 23rd Annual EMBS International Conference, Istanbul, Turkey, October 25-28, 2001, pp. 1944-1947.
[6] K. Donohue and C. Scheib, "EEG fractal response to anesthetic gas concentration." Available at: www.engr.uky.edu/_donohue/eeg/pre1/EEGpre2.html.
[7] X. Kong, N. Thakor, and V. Goel, "Characterization of the EEG signal changes via Itakura distance," IEEE-EMBC and CMEC, 1995, pp. 873-874.
[8] J. Muthuswamy and N. V. Thakor, "Spectral analysis methods for neurological signals," Journal of Neuroscience Methods, 1998, pp. 1-14.