11th IEEE International Conference on Communication Systems and Network Technologies (CSNT), 2022
978-1-6654-8038-3/22/$31.00 ©2022 IEEE
DOI: 10.1109/CSNT54456.2022.9787648
Blood Pressure Detection Using CNN-LSTM Model
Ketan Gupta
Research Scientist
University of The Cumberlands
USA
ketan1722@gmail.com
Nasmin Jiwani
Research Scientist
University of The Cumberlands
USA
nasminjiwani@gmail.com
Neda Afreen
Department of Computer
Engineering
Jamia Millia Islamia
New Delhi, India
neda184441@st.jmi.ac.in
Abstract— Blood pressure (BP) is a key indicator that needs to be checked on a regular basis. Normal blood pressure is essential for maintaining a healthy life, and sustained deviations can cause serious health problems such as hypertension, deadly cardiovascular disorders and kidney failure. Hypertension is one of the leading causes of death worldwide. For the early diagnosis and prevention of fatal events, an effective unobtrusive technique for continuous BP monitoring is required. Here, we present a new classification approach for BP based on the PPG signal and a CNN-LSTM model. Classification is performed at two levels: between Normotension (NT) and Prehypertension (PHT), and between Normotension (NT) and Hypertension (HT). The findings reveal that the classification of normotension vs hypertension yields a higher accuracy of 67.76%, slightly above that of normotension vs prehypertension.
Keywords— Blood Pressure, Convolutional Neural Network, Long Short Term Memory, PPG Signal
I. INTRODUCTION
Blood pressure (BP) is an essential metric for the diagnosis of heart disease at an early stage, as it is connected with symptoms of hypotension or hypertension. BP is a measurement of the force exerted on artery walls by the heart as it pumps blood throughout the body. A normal adult blood pressure reading is about 120/80 mmHg. Hypotension refers to blood pressure that is continuously low, whereas hypertension refers to blood pressure that is constantly high [1]. A BP measurement yields three values in millimetres of mercury (mmHg): diastolic blood pressure (DBP), systolic blood pressure (SBP), and mean arterial pressure (MAP). Hypertension can cause a variety of diseases including cardiovascular disease, cerebral infarction, stroke and kidney failure. According to the World Health Organization, hypertension is becoming a major cause of death in the developing world. It has therefore become a worldwide problem that requires an effective live BP monitoring system for patients' self-monitoring [2].
Both invasive and noninvasive methods are available for measuring blood pressure. Invasive technologies have been shown to measure blood pressure effectively and consistently, but they are inconvenient to use and can cause infections in patients. Noninvasive methods of monitoring blood pressure, such as electronic bioimpedance (EBI), ballistocardiography (BCG), tonometry, and photoplethysmography (PPG), have been created to make the measurement process easier and more pleasant. PPG signals have been used in a variety of ways to predict blood pressure. Features are extracted from the PPG signal in various ways and then fed into machine learning algorithms. For the prediction of blood pressure, many machine learning models have been built using methods such as neural networks (NN), logistic regression (LR), random forest (RF) and support vector machines (SVM) [3].
Here, we present a new classification approach for BP based on the PPG signal and a CNN-LSTM model. Classification is performed at two levels: between Normotension (NT) and Prehypertension (PHT), and between Normotension (NT) and Hypertension (HT). The CNN and LSTM models are combined to obtain results with a quick training time [4]. Training this CNN-LSTM network using time-frequency (TF) features results in enhanced classification performance and reduced training time.
The layout of the paper is as follows: Section II introduces the literature survey, Section III details the working model, Section IV presents our dataset along with the findings, and Section V concludes the paper.
II. RELATED WORK
In recent years, there has been a movement toward employing PPG signals for BP prognosis and disease classification. In a study of 168 hemodialysis patients, Tripepi et al. [5] compared ambulatory blood pressure (ABP) to office blood pressure (OBP). They conclude that ABP is the strongest predictor of cardiovascular issues and other serious conditions caused by BP variations, such as hypertension and cerebral infarction. Perloff et al. [6] reached the same conclusion after completing a comparative study of ABP and OBP. They discovered that ABP is an independent prognostic sign in a patient's overall risk profile.
As varying BP has become an important signal for various deadly conditions, as well as for classifying patients among the stages of hypertension, the demand for BP prognosis using non-invasive wearable tools has arisen. This prompted the creation of a number of machine learning-based models. Monte-Moreno et al. [7] built a model that predicts blood pressure using information extracted from PPG signals. They tested different machine learning algorithms, such as NN, LR, RF and SVM, to see which performs better in predicting blood pressure, and found that RF performs best overall. Kurylyak et al. [8] designed a model comparing linear regression against neural networks with four and with twenty-one input neurons, in order to determine which of the three performs best in estimating blood pressure. They discovered that the neural network with twenty-one input neurons has the highest performance.
As PPG research has advanced significantly in the evaluation of vital signs such as temperature, pulse rate, blood pressure and respiration rate, various initiatives have been taken to combine these signs into wearable devices for real-time monitoring. Elgendi et al. [9] reviewed and analysed various models for predicting blood pressure using the PPG signal. Liang et al. [10] performed research using morphological features taken from PPG signals for the early detection of hypertension, based on the blood pressure predicted from the PPG signal. This study's shortcoming is that it only looked at the relationship between PPG characteristics and SBP.
III. METHODOLOGY
The noise present in the PPG signal dataset has an impact on signal quality, which can cause inaccuracies. To address this difficulty and achieve an optimized outcome, the signal is preprocessed by performing denoising and transformation. The signal is denoised by passing it through a median filter with a kernel size of 23 [11]; an odd kernel size is preferred so that a true mid value can be determined. The median filter's output is then sent through a backward-forward (zero-phase) filter that takes the Chebyshev type-II filter's output as a parameter. The Chebyshev type-II filter is configured as a low-pass filter of order 4 with a cutoff frequency of 25 Hz and a minimum stopband attenuation of 10 dB; a 500 Hz sampling frequency is passed as an argument. The backward-forward filter smooths the edges of the signal that has been filtered by the median and Chebyshev filters [12]. The signal is then transformed using FastICA, a popular algorithm for independent component analysis.
Fig 1. PPG signal with noise
Fig 2. Signal after passing through median filter
Fig 3. Signal after passing through Chebyshev 2 filter
Fig 4. Signal after passing through backward forward filter
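The denoising chain described above can be sketched with SciPy. The filter parameters (kernel size 23, order 4, 10 dB stopband attenuation, 25 Hz cutoff, 500 Hz sampling rate) follow the text; the input signal itself is a synthetic stand-in, not real PPG data:

```python
import numpy as np
from scipy.signal import medfilt, cheby2, filtfilt

# Synthetic noisy stand-in for a PPG segment: 2.1 s sampled at 500 Hz
fs = 500
t = np.arange(0, 2.1, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

# 1) Median filter with an odd kernel size of 23 (odd so a true mid value exists)
denoised = medfilt(ppg, kernel_size=23)

# 2) Chebyshev type-II low-pass design: order 4, 10 dB minimum stopband
#    attenuation, 25 Hz cutoff, for a 500 Hz sampling rate
b, a = cheby2(N=4, rs=10, Wn=25, btype="low", fs=fs)

# 3) Backward-forward (zero-phase) filtering smooths the median-filtered signal
smoothed = filtfilt(b, a, denoised)
```

The final FastICA transformation could then be applied to `smoothed` with, for example, `sklearn.decomposition.FastICA`.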
After pre-processing, the CNN-LSTM model is used to perform classification.
CNN — A CNN is a deep-learning architecture that is particularly good at preserving and extracting spatial properties from input data during training [13]. CNNs use two very important operations, convolution and pooling kernels, to create multi-dimensional feature maps from the data and analyze these feature maps from various perspectives to highlight their non-linear characteristics. The convolution kernel is based on the Laplacian filter, which finds gradients in data [14]. The pooling layer extracts the most relevant features for the model within its window. Finally, a fully-connected dense layer is added to the model to ready it for classification [20].
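A minimal NumPy illustration of the two operations above, using a discrete Laplacian-style kernel as the convolution filter (the input vector is a toy example, not PPG data):

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1-D convolution: slide the kernel and take dot products."""
    k = kernel.size
    return np.array([x[i:i + k] @ kernel for i in range(x.size - k + 1)])

def max_pool1d(x, window):
    """Non-overlapping max pooling: keep the strongest response per window."""
    n = x.size // window
    return x[:n * window].reshape(n, window).max(axis=1)

x = np.array([0., 1., 4., 9., 16., 25.])   # toy input: squares of 0..5
laplacian = np.array([1., -2., 1.])        # discrete Laplacian kernel, highlights curvature
feat = conv1d_valid(x, laplacian)          # second differences: [2., 2., 2., 2.]
pooled = max_pool1d(feat, 2)               # strongest response per window: [2., 2.]
```

In a real CNN the kernel weights are learned rather than fixed, but the sliding-window mechanics are the same.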
LSTM — An LSTM network is an improved recurrent neural network design. LSTMs, like RNNs, are capable of capturing the time-series dependence in data. RNNs can preserve long-term dependencies in data in theory, but they cannot do so in practice. The vanishing gradient problem, which occurs when RNN units fail to make meaningful adjustments to their weights after processing long sequences of data, is a fundamental challenge in RNNs [15]. An LSTM model, on the other hand, can solve these issues by using "gates", which are the model's constituent parts.
The CNN-LSTM model summary is shown in Fig. 5.
Fig 5. CNN-LSTM model summary
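The exact layer sizes of Fig. 5 are not reproduced here, so the filter counts and unit sizes below are illustrative assumptions only. This is a representative Keras sketch of a 1-D CNN-LSTM binary classifier for 2100-sample PPG segments:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_lstm(segment_len=2100):
    """Sketch of a CNN-LSTM classifier; layer sizes are assumptions, not Fig. 5's."""
    model = keras.Sequential([
        keras.Input(shape=(segment_len, 1)),       # one 2.1 s PPG segment at 1 kHz
        layers.Conv1D(32, 5, activation="relu"),   # convolution extracts local spatial features
        layers.MaxPooling1D(4),                    # pooling keeps the most relevant responses
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.LSTM(64),                           # LSTM captures time-series dependence
        layers.Dense(1, activation="sigmoid"),     # binary output, e.g. NT vs HT
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
```

Convolution and pooling shrink the 2100-sample segment to a short feature sequence before the LSTM, which is one way such a combination keeps training time low.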
IV. EXPERIMENTAL RESULT
In this section, we evaluate and compare the model's performance using the dataset.
A. Dataset
Patient data was collected by Guilin People's Hospital in China. After gaining written consent and approval from the institution's ethics committee, complete medical data of 219 patients was combined. The dataset includes the PPG signals of individuals with cardiovascular disease (CVD), diabetes, insufficient cerebral blood supply and cerebral infarction [16]. This freely available dataset seeks to aid the prediction of CVD using PPG, as well as to estimate and investigate the link between CVD and PPG by studying the PPG signal.
The dataset contains 657 PPG signal segments from 219 individuals ranging in age from 20 to 89 years, along with their hypertension and diabetes histories. Weight, height, age, diastolic blood pressure, systolic blood pressure, body mass index (BMI) and heart rate are all included [17]. Patients are classified based on their blood pressure and diabetes levels, as well as on their cerebrovascular disease and cerebral infarction history.
The sampling rate of the signal is 1 kHz, and each patient's record contains 2100 sampling points, corresponding to 2.1 seconds. Among the 219 patients, 48 percent are male and 52 percent are female. According to the hypertension categorization, 36.5 percent of patients have normal blood pressure, 36.8 percent have prehypertension, 15.5 percent have stage 1 hypertension, and 9.2 percent have stage 2 hypertension. Cerebral infarction affects 9.5 percent of patients, while insufficiency of cerebral blood supply and cerebrovascular disease affect 5 percent and 6.8 percent of patients, respectively [18].
B. Result
Using the CNN-LSTM network, we conducted an experiment to classify BP based on PPG data. A confusion matrix is employed in this work to visualize classifier performance on a set of data with known true values [19]. The confusion matrix can be calculated using the training values. The CNN-LSTM network's confusion matrix from the training phase is presented in Fig. 6. TP, FP, TN, FN, Ac, Sp and Se were among the assessment indices used to fully analyze the tested models.
Ac denotes accuracy, Sp denotes specificity and Se denotes sensitivity. The TP, FP, TN, and FN values are used to calculate these metrics. The classification performance of our suggested technique is shown in Table 1.
(a) (b)
Fig 6. Confusion matrix of (a) NT vs PHT (b) NT vs HT
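The three metrics can be computed directly from the confusion-matrix counts; a minimal sketch, where the example counts are hypothetical rather than taken from the paper's matrices:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy (Ac), specificity (Sp) and sensitivity (Se) from confusion-matrix counts."""
    ac = (tp + tn) / (tp + fp + tn + fn)   # fraction of all predictions that are correct
    sp = tn / (tn + fp)                    # true-negative rate
    se = tp / (tp + fn)                    # true-positive rate (recall)
    return ac, sp, se

# Hypothetical counts for illustration only
ac, sp, se = classification_metrics(tp=68, fp=33, tn=67, fn=32)
# ac = 0.675, sp = 0.67, se = 0.68
```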
Authorized licensed use limited to: University of the Cumberlands. Downloaded on June 08,2022 at 23:48:07 UTC from IEEE Xplore. Restrictions apply.
365
TABLE 1. Classification performance
Trials      Train Accuracy   Test Accuracy   Sensitivity   Specificity
NT vs PHT   73.7%            61.07%          55.9%         64.4%
NT vs HT    93.59%           67.76%          68.4%         66.6%
The classification of normotension vs hypertension yields a higher test accuracy of 67.76% after 300 epochs, slightly above that of normotension vs prehypertension after 150 epochs. The training accuracy of normotension vs hypertension is also higher, at 93.59%.
In a previous study, KNN yielded a 63.92% F1 score for NT vs PHT and 62.26% for NT vs HT, while AdaBoost achieved a 66.88% F1 score for NT vs PHT and 53.19% for NT vs HT.
Training and validation loss and accuracy are shown in Figures 7 and 8.
(a) (b)
Fig 7. Training and validation accuracy of (a) NT vs PHT (b) NT vs
HT
(a) (b)
Fig 8. Training and validation loss of (a) NT vs PHT (b) NT vs HT
V. CONCLUSION
The rise in patient mortality rates as a result of high blood pressure is alarming, and it has prompted a slew of new developments in BP prediction and live monitoring. Here, classification of BP is performed using the PPG signal and a CNN-LSTM model. Classification is performed at two levels: between Normotension (NT) and Prehypertension (PHT), and between Normotension (NT) and Hypertension (HT). This study uses the PPG dataset from Guilin People's Hospital to predict blood pressure. The paper describes the various layer configurations and their performance, which helps in understanding the model's response to different configurations. The findings reveal that the classification of normotension vs hypertension yields a higher accuracy of 67.76%, slightly above that of normotension vs prehypertension. In future studies, larger sample sizes could be used to improve the effectiveness of BP classification based on the PPG signal.
REFERENCES
[1] M. Sameer and B. Gupta, "Beta Band as a Biomarker for Classification between Interictal and Ictal States of Epileptical Patients," in 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN), 2020, pp. 567-570, doi: 10.1109/SPIN48934.2020.9071343.
[2] S. K. B. Sangeetha, N. Afreen, and G. Ahmad, "A Combined Image Segmentation and Classification Approach for COVID-19 Infected Lungs," Rev. Comput. Eng. Stud., vol. 8, no. 3, pp. 71-76, 2021.
[3] M. Sameer, A. K. Gupta, C. Chakraborty, and B. Gupta, "Epileptical Seizure Detection: Performance analysis of gamma band in EEG signal Using Short-Time Fourier Transform," in 2019 22nd International Symposium on Wireless Personal Multimedia Communications (WPMC), 2019, pp. 1-6, doi: 10.1109/WPMC48795.2019.9096119.
[4] A. Mahajan, K. Somaraj, and M. Sameer, "Adopting Artificial Intelligence Powered ConvNet To Detect Epileptic Seizures," in 2020 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), 2021, pp. 427-432, doi: 10.1109/IECBES48179.2021.9398832.
[5] R. Ekart, V. Kanič, B. Pečovnik Balon, S. Bevc, and R. Hojs, "Prognostic Value of 48-Hour Ambulatory Blood Pressure Measurement and Cardiovascular Mortality in Hemodialysis Patients," Kidney Blood Press. Res., vol. 35, no. 5, pp. 326-331, 2012, doi: 10.1159/000336357.
[6] D. Perloff, M. Sokolow, and R. Cowan, "The Prognostic Value of Ambulatory Blood Pressures," JAMA, vol. 249, no. 20, pp. 2792-2798, May 1983, doi: 10.1001/jama.1983.03330440030027.
[7] E. Monte-Moreno, "Non-invasive estimate of blood glucose and blood pressure from a photoplethysmograph by means of machine learning techniques," Artif. Intell. Med., vol. 53, pp. 127-138, Jun. 2011, doi: 10.1016/j.artmed.2011.05.001.
[8] Y. Kurylyak, F. Lamonaca, and D. Grimaldi, "A Neural Network-based method for continuous blood pressure estimation from a PPG signal," 2013 IEEE Int. Instrum. Meas. Technol. Conf., pp. 280-283, 2013.
[9] M. Elgendi et al., "The use of photoplethysmography for assessing hypertension," npj Digit. Med., vol. 2, no. 1, p. 60, 2019, doi: 10.1038/s41746-019-0136-7.
[10] Y. Liang, Z. Chen, R. Ward, and M. Elgendi, "Hypertension Assessment Using Photoplethysmography: A Risk Stratification Approach," J. Clin. Med., vol. 8, no. 1, p. 12, Dec. 2018, doi: 10.3390/jcm8010012.
[11] N. Nasir, N. Afreen, R. Patel, S. Kaur, and M. Sameer, "A Transfer Learning Approach for Diabetic Retinopathy and Diabetic Macular Edema Severity Grading," Rev. d'Intelligence Artif., vol. 35, pp. 497-502, Dec. 2021, doi: 10.18280/ria.350608.
[12] M. Sameer and B. Gupta, "ROC Analysis of EEG Subbands for Epileptic Seizure Detection using Naive Bayes Classifier," J. Mob. Multimed., pp. 299-310, 2021.
[13] M. Sameer and B. Gupta, "Time-Frequency Statistical Features of Delta Band for Detection of Epileptic Seizures," Wirel. Pers. Commun., 2021, doi: 10.1007/s11277-021-08909-y.
[14] S. M. Beeraka, A. Kumar, M. Sameer, S. Ghosh, and B. Gupta, "Accuracy Enhancement of Epileptic Seizure Detection: A Deep Learning Approach with Hardware Realization of STFT," Circuits, Syst. Signal Process., 2021, doi: 10.1007/s00034-021-01789-4.
[15] S. Gupta, M. Sameer, and N. Mohan, "Detection of Epileptic Seizures using Convolutional Neural Network," in 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), 2021, pp. 786-790, doi: 10.1109/ESCI50559.2021.9396983.
[16] P. Porwal et al., "Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research," Data, vol. 3, no. 3, 2018, doi: 10.3390/data3030025.
[17] M. Sameer and P. Agarwal, "Coplanar waveguide microwave sensor for label-free real-time glucose detection," Radioengineering, vol. 28, no. 2, p. 491, 2019.
[18] M. Sameer and B. Gupta, "Detection of epileptical seizures based on alpha band statistical features," Wirel. Pers. Commun., vol. 115, no. 2, pp. 909-925, 2020, doi: 10.1007/s11277-020-07542-5.
[19] M. Sameer, A. K. Gupta, C. Chakraborty, and B. Gupta, "ROC Analysis for detection of Epileptical Seizures using Haralick features of Gamma band," in 2020 National Conference on Communications (NCC), 2020, pp. 1-5, doi: 10.1109/NCC48643.2020.9056027.
[20] N. Afreen, R. Patel, M. Ahmed, and M. Sameer, "A Novel Machine Learning Approach Using Boosting Algorithm for Liver Disease Classification," in 2021 5th International Conference on Information Systems and Computer Networks (ISCON), 2021, pp. 1-5.