Brain Computer Interaction Framework for Speech
and Motor Impairment Using Deep Learning
Abhijit T. Somnathe
Electronics and Telecommunication
Engineering, Smt. Kashibai Navale
College of Engineering, Pune, India
abhijittsomnathe@gmail.com
Nipun Sharma
Electronics and Communication
Engineering, Presidency University
Bangalore, Karnataka, India.
nipun.sharma@presidencyuniversity.in
Iftikhar Aslam Tayubi
Faculty of Computing and Information
Technology Rabigh, King Abdulaziz
University, Jeddah, Saudi Arabia
iftikhar.tayubi@gmail.com
Vikas Sharma
School of Computer Science &
Applications, IIMT University,
Meerut, Uttar Pradesh, India
vicky.c610@gmail.com
*Pundru Chandra Shaker Reddy
School of Computer Science and
Engineering, Manipal University
Jaipur, Jaipur-303007, Rajasthan, India
chandu.pundru@gmail.com
Mannava Yesubabu
Computing Science and Engineering,
Vardhaman College of Engineering,
Hyderabad, Telangana, India
mannavababu@gmail.com
Abstract: Some people have trouble communicating with others because of motor and speech impairments brought on by accidents, strokes, or disease. Paralyzed people often struggle to communicate their needs, which can make even the simplest tasks challenging, and people with conditions such as dysarthria and amyotrophic lateral sclerosis may have trouble taking part in conversations. The proposed method for automatic detection of everyday basic needs is intended to help those with dysarthria and quadriplegic paralysis lead more fulfilling lives. The system monitors brain signals and converts them into either audible voice commands or messages delivered to a healthcare provider's smart devices, depending on the user's preferences. The proposed method randomly displays an image of one of six basic needs while using event-related potentials (ERPs) detected in the electroencephalogram (EEG) data to determine which need to fulfill. The dataset used for training, testing, and validation was built from recordings of 10 participants. The proposed method achieved a 96.3% success rate.
Keywords: EEG, Brain-Computer Interface, Deep Learning, ERP, CNN
I. INTRODUCTION
Without stimulating any peripheral nerves or muscles, an EEG-based brain-computer interface (BCI) can detect a user's intentions and translate them into commands. The severe transportation, mobility, educational, and healthcare needs of people with disabilities were brought into sharp focus by the recent COVID-19 pandemic, and when it comes to controlling electronic devices such as wheelchairs, an EEG-based BCI is a superior option for disabled patients during such a pandemic [1]. There are two main types of BCI systems: synchronous BCI, in which the user follows the computer's prompts, and asynchronous BCI, in which the user initiates the interaction. Electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) are all non-invasive ways to study brain activity [2].
The leading BCI technologies rely on EEG. The ease of use and low cost of EEG-based BCI systems enable a variety of real-time implementations. In motor imagery (MI), one mentally imagines carrying out a physical task without actually doing so; the subject experiences the sensation of performing a concrete action. Oscillatory activity between the somatosensory and motor systems is referred to as sensorimotor rhythms (SMR). At present, EEG can be used to classify a variety of MI signals, and after enough practice these MI signals can operate BCI hardware [3]. During electroencephalography, a set of electrodes placed on the scalp (anywhere from 10 to 20 electrodes) records the brain's electrical activity. This neuronal electrical activity provides the basis for brain-computer interfaces, and BCI systems can exploit the recorded voltages for a number of uses. A conceptual diagram of a BCI is shown in Figure 1.
Fig. 1. Interface diagram of the BCI
Sequential operations make up the BCI system: signal acquisition, feature extraction from the task, subset selection from the feature set, mental-state classification, and the generation of feedback signals. Compared with other neuroimaging methods, EEG's portability, low cost (particularly relative to fMRI), ease of use, and excellent temporal resolution make it a prime candidate for use in a BCI setting [4]. EEG is highly recommended for BCI, especially when providing real-time biofeedback, because of the superior temporal information and direct assessment of neural activity it provides. EEG is also a useful strategy for studying brain function, since the electric potentials recorded from the scalp reflect neuronal activity and have various uses outside brain-computer interface research [5]. EEG signals have great temporal resolution because electric fields propagate
quickly, but they also have drawbacks. Data gathering is tedious and limited in scope, which can negatively affect the efficiency of learning models. Due to privacy laws, medical records are typically unavailable. Cleaning EEG data, extracting essential characteristics, and classifying the signals can be difficult and time-consuming without processing pipelines that take a domain-specific approach, and good decoding performance may require artifact removal [6].
In many implementations, BCI systems rely on decoding pipelines that employ a wide variety of machine learning techniques. Prior to the advent of deep learning (DL), signal processing and machine learning (ML) approaches were traditionally combined in a pipeline to analyze EEG data: improving the signal-to-noise ratio (SNR), handling EEG artifacts, extracting features, and interpreting or decoding the signals. DL is a subfield of machine learning, based on artificial neural networks, that employs strategies enabling a system to automatically discover and categorize features from raw data [7]. DL models are multi-layered ANNs, linear or non-linear, taken to a deeper level. Even though ERPs have been employed in a variety of contexts to advance P300-based spelling BCIs, there is clearly a need for further development and innovation in this area [8].
In addition, most researchers in this area still struggle with the difficulty of decoupling EEG signals that arise concurrently and may be fully or partially overlaid with functionally important event-related potentials. Classifying EEG signals is a multistep process that typically involves two stages. First, the dynamic temporal distortion between EEG signals is measured with an algorithm, or the signals are encoded as feature vectors using mathematical tools ranging from elementary statistics to more sophisticated approaches [9]. Next, this information is fed into an algorithm that uses methods such as k-NN, neural networks, or SVMs to categorize it. However, all of these approaches first need to construct features before the EEG signal can be classified. This study introduces a new method for extracting highly informative features from EEG signals by utilizing CNNs to spot ERPs inside an EEG signal. This method could one day help people who are paralyzed or have speech or motor impairments communicate with others. Participants were shown a series of photographs in random order and indicated which of six essential necessities they felt they were lacking.
The rest of the paper is structured as follows: Section 2 presents a review of the publications that were analyzed, and Section 3 details the development and application of the proposed method. The findings and discussion are presented in Section 4, while the conclusions and future research are presented in Section 5.
II. RELATED WORKS
In 1971 [10], researchers first explored the possibility of
harnessing brain impulses to operate prosthetic limbs. Since
then, EEG-based brain-computer interfaces (BCIs) have
received extensive interest from researchers thanks to their
potential for gaining insight into neural communications while
remaining user-friendly, non-invasive, and inexpensive.
These alluring features of BCI systems have prompted a wide
range of studies during the past twenty years [11]. Past EEG studies are reviewed here for motor imagery-based brain-computer interfaces (MI-BCIs). BCI systems make use of motor imagery, ERP, and steady-state evoked potential (SSEP) paradigms. Cognitive or motor sensory inputs can elicit ERPs, which are then processed by the brain. Touch, sight, and sound are common external stimuli used in ERP-based BCI [12].
However, considerable attention has recently been paid to MI-BCI devices. MI is the mental act of envisioning oneself physically moving one's limbs or other parts of the body without really doing so. Movements are mimicked nearly exactly, and distinctive brain patterns in sensorimotor regions are induced during MI activities. Decoding the MI signal from the EEG was a challenge, but BCI systems overcame it with the help of feature extraction and classification based on the MI data [13]. Since the EEG spectrum shifts over time, several methods of feature extraction are necessary. Due to its applicability to non-stationary signals and its ability to give refined frequency resolution, the wavelet packet transform (WPT) has emerged as one of the most outstanding time-frequency signal-processing methods. WPT coefficients are crucial for minimizing noise and compressing signals, and they are employed in a wide variety of disciplines. For EEG-based BCI systems, researchers have investigated feature extraction in the three most common domains: time, frequency, and time-frequency. The results of time-frequency domain approaches for MI brain-computer interfaces are promising [14]. The classification rate for left-hand versus right-hand motor imagery using a spiking neural network (SNN) classifier was 75.54% when the wavelet transform (WT) was used for feature extraction.
Brain-computer interfaces rely on EEG signals, and a new
adaptive technique was proposed for automatic feature
extraction and selection in Ref. [15]. Feature vectors are the
basis for machine learning and deep learning classifiers, which
attempt to deduce the user's target. The ability of CNNs to
extract the most distinctive properties for classification has
made them a popular tool in DL approaches for MI-EEG
recognition. Nonlinear ML classifiers, such as ANN and k-
NN, also achieve high performance. Using biological signals,
the authors of Ref. [16] conducted a comparative study of various ML/DL strategies for motor imagery identification.
However, there are both invasive and non-invasive ways
to acquire signals to track neural activity. Neurosurgical
implantation of microelectrodes to the entire cerebral cortex
or over the entire cerebrum under the scalp is a highly invasive
procedure. High-resolution neural signals are obtained using
this method, but it is not recommended for recording human
brain activity because it can lead to scar tissue and infections.
These are just some of the methods used to record neural
activity. Because of its reliability and ease of use, EEG is the
method of choice [17]. Machine learning and deep learning
are two branches of artificial intelligence. The term "machine learning" describes an artificial intelligence that learns on its own or with very little human input. DL, on the other hand, is a subfield of ML that uses deeper neural networks to learn from large datasets.
III. PROPOSED METHODOLOGY
There are three stages in the proposed system. In the first stage, "data acquisition," the patient concentrates on one of the six needs while a series of images is randomly displayed on screen. The EEG signal is picked up by a wireless EEG headset and transmitted digitally to the next stage. The second stage, "signal processing," is split into two sub-phases. In the first, "Signal Preprocessing and Digital Filtering," the signal is filtered digitally. The second sub-phase, "Feature Extraction,"
involves processing the signal with CNNs to extract deep features that are highly informative and time-independent with regard to the occurrence of ERP patterns; the final determination is made on the basis of these features. In the third stage, "classification," one of the six basic human needs is selected. This choice is communicated to the healthcare professional via an audible voice command or a text message, depending on the system parameters. The block diagram of the system is shown in Figure 2.
To improve the classification accuracy, pre-trained networks and more conventional ML strategies were also examined. First, an analysis was conducted comparing how well various pre-trained networks classified cases of skin cancer using the ISBI 2016 dataset. Next, the features obtained from these pre-trained networks were used to train classic classifiers, specifically SVM, k-NN, and DT, and the effectiveness of these approaches was assessed. Lastly, two different hybrid models are suggested.
Fig. 2. Work structure of the proposed model
A. Data Acquisition and Signal Preprocessing
A 14-channel wireless EEG headset was used to record electrical brain activity, with electrodes placed in accordance with a standard montage; importantly, the electrodes in this dry EEG headset can be positioned by the user. The acquired EEG signal was amplified because the original signal amplitudes were between 0.5 and 100 µV. The acquisition system includes an instrumentation amplifier that must be calibrated for the specifics of the EEG bio-signal [18]; hence, it should have an input impedance of more than 50 GΩ, a frequency response of 0.3 to 35 Hz, and a common-mode rejection ratio of more than 100 dB. The signal was then converted to digital form by an A/D converter at a sample rate of 2048 Hz. It can be difficult to glean meaningful insights from raw EEG data due to the presence of high-frequency disturbances and arbitrary noise. This is why a 6th-order Butterworth band-pass digital filter, with a low cutoff frequency of 1 Hz and a high cutoff frequency of 12 Hz, was used to digitally filter the incoming EEG signal [19].
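As an illustration only, a minimal Python sketch of this filtering stage is given below, assuming SciPy is available; the filter order, cutoff frequencies, and sampling rate are taken from the text, while the zero-phase filtering choice (filtfilt) and the array shapes are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2048                       # sampling rate in Hz (after A/D conversion)
LOW_HZ, HIGH_HZ = 1.0, 12.0     # band-pass cutoffs stated in the text
ORDER = 6                       # 6th-order Butterworth filter

def bandpass_eeg(eeg, fs=FS, low=LOW_HZ, high=HIGH_HZ, order=ORDER):
    """Band-pass filter an EEG epoch of shape (channels, samples)."""
    # Design the Butterworth band-pass filter with normalized cutoffs.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    # Zero-phase filtering along the time axis (an assumed design choice).
    return filtfilt(b, a, eeg, axis=-1)

# Example: 14 channels, 500 ms of data (1024 samples at 2048 Hz).
raw = np.random.randn(14, 1024)
filtered = bandpass_eeg(raw)
print(filtered.shape)           # (14, 1024)
```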
B. Feature extraction
The CNNs, which offer significant advantages over conventional classification approaches, are used in a novel way to extract highly informative features from EEG signals. They are resistant to noise and can extract deep features that are both informative and independent of time. The proposed method operates on both the raw EEG data series and its down-sampled iterations. This is achieved by applying a series of one-dimensional convolution kernels, each of which has a length l that varies with the number of time steps and a width k = 14 that is fixed to match the number of EEG channels.
Assume the 14-channel wireless EEG headset samples the original EEG data at 2048 samples per second, giving a 500-millisecond signal (n = 1024).
EEG_1 = {D_11, D_12, D_13, D_14, ..., D_1n}
EEG_2 = {D_21, D_22, D_23, D_24, ..., D_2n}
EEG_3 = {D_31, D_32, D_33, D_34, ..., D_3n}        (1)
  ...
EEG_k = {D_k1, D_k2, D_k3, D_k4, ..., D_kn}
where n = 1024 and k = 14.
The EEG data sequences are transformed into new data sequences with varying frequencies via a moving average with window length l:

EEG'_1(i) = (D_1i + D_1(i+1) + ... + D_1(i+l-1)) / l,   i = 1, ..., n - l + 1        (2)
Each channel's n-sample EEG data sequence can be split into various frequency-specific sequences by changing the value of the parameter l. As can be seen in Fig. 3, the length of each of these new EEG data series will be (n - l + 1).
Fig. 3. Design of 1-D Convolution moving mean kernel
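A minimal NumPy sketch of this moving-average 1-D convolution, consistent with Eq. (2), is shown below; the specific window lengths l used in the example are illustrative assumptions, since the text only states that l is varied.

```python
import numpy as np

def moving_average_conv(eeg, l):
    """Apply a length-l moving-average kernel to each EEG channel.

    eeg: array of shape (k, n), with k = 14 channels and n samples.
    Returns an array of shape (k, n - l + 1), as in Eq. (2).
    """
    kernel = np.ones(l) / l
    # 'valid' mode keeps only fully overlapping windows.
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in eeg])

eeg = np.random.randn(14, 1024)                   # 500 ms of 14-channel EEG at 2048 Hz
for l in (4, 8, 16):                              # example window lengths (assumed)
    print(l, moving_average_conv(eeg, l).shape)   # (14, 1024 - l + 1)
```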
Max pooling follows: it takes the vectors produced by the convolution layer and keeps the greatest value from each of them. Each vector's greatest element is used to construct a new vector called the maxima vector. The new EEG data series and the down-sampled versions both undergo the same process [20]. The concatenated maxima vector is the concatenation of all the maxima vectors from the previous steps, and it is used as the input to the next stage. Fig. 4 depicts the final maxima vector produced by this layer.
Fig. 4. Maxima vectors concatenation structure
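A sketch of the max-pooling and concatenation step follows; treating the per-channel maxima of the raw series and of each moving-average series as one flat feature vector is an assumption about how the concatenated maxima vector is assembled.

```python
import numpy as np

def moving_average_conv(eeg, l):
    """Length-l moving average of each channel (as in the previous sketch)."""
    kernel = np.ones(l) / l
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in eeg])

def maxima_vector(series):
    """Global max pooling: the largest value in each channel's series."""
    return series.max(axis=-1)                     # shape (k,)

def concatenated_maxima_vector(eeg, window_lengths=(4, 8, 16)):
    """Concatenate the maxima vectors of the raw series and of its
    moving-average versions into a single feature vector."""
    vectors = [maxima_vector(eeg)]                 # raw EEG series first
    for l in window_lengths:
        vectors.append(maxima_vector(moving_average_conv(eeg, l)))
    return np.concatenate(vectors)

eeg = np.random.randn(14, 1024)                    # 14 channels, 500 ms at 2048 Hz
features = concatenated_maxima_vector(eeg)
print(features.shape)                              # (56,) = 14 channels x 4 series
```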
IV. RESULTS AND DISCUSSIONS
The gathered dataset included 4440 P300-labeled EEG
signals displaying ERP patterns, in particular the P300
sample, and 22200 NoneP300-labeled EEG-signals
representing normal brain activity.
The class imbalance issue could be resolved either by oversampling the P300 EEG signals to make them comparable in number to the NoneP300 EEG signals, or by down-sampling the NoneP300 EEG signals to make them comparable to the P300 EEG signals. To prevent the potentially misleading data redundancy introduced by oversampling, the second option was adopted in this work and the NoneP300 EEG signals were down-sampled. To facilitate training and validation, the dataset was then split 70:30 across the two categories.
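A minimal sketch of this balancing and splitting step is shown below; scikit-learn's resample and train_test_split are used purely for illustration, since the tooling is not named in the paper, and the feature dimensionality is a placeholder.

```python
import numpy as np
from sklearn.utils import resample
from sklearn.model_selection import train_test_split

# X: one feature vector per EEG signal (dimensionality is a placeholder),
# y: the corresponding labels.
X = np.random.randn(26640, 56)
y = np.array(["P300"] * 4440 + ["NoneP300"] * 22200)

# Down-sample the NoneP300 majority class to the size of the P300 class.
p300_idx = np.where(y == "P300")[0]
none_idx = resample(np.where(y == "NoneP300")[0],
                    n_samples=len(p300_idx), replace=False, random_state=0)
keep = np.concatenate([p300_idx, none_idx])

# 70:30 split of the balanced dataset for training and validation.
X_train, X_val, y_train, y_val = train_test_split(
    X[keep], y[keep], test_size=0.30, stratify=y[keep], random_state=0)
print(X_train.shape, X_val.shape)          # (6216, 56) (2664, 56)
```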
A CNN was the first proposed DL architecture. The CNN uses three sets of one-dimensional (1-D) convolutional layers and an activation
function, ReLU, which executes a threshold operation wherein input values less than zero are converted to zero. Training a CNN is sped up and initialization noise is dampened by inserting a normalization layer between the convolutional and threshold layers. The final layer is the activation function for the output unit, a normalized exponential (SoftMax) activation layer.
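The exact layer widths, kernel sizes, and training settings are not reported, so the following PyTorch sketch of the described architecture (three 1-D convolution blocks with batch normalization and ReLU, followed by a SoftMax output over the P300/NoneP300 classes) uses assumed values for those hyperparameters. In practice one would usually drop the final SoftMax and train with a cross-entropy loss on the raw logits.

```python
import torch
import torch.nn as nn

class P300CNN(nn.Module):
    """Three 1-D conv blocks (conv -> batch norm -> ReLU) plus a SoftMax
    output, as described in the text; channel counts and kernel sizes
    are assumptions."""
    def __init__(self, in_channels=14, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # global max pooling over time
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, n_classes),
            nn.Softmax(dim=1),                 # normalized exponential output
        )

    def forward(self, x):                      # x: (batch, 14, 1024)
        return self.classifier(self.features(x))

model = P300CNN()
epochs = torch.randn(8, 14, 1024)              # a batch of 500 ms EEG epochs
print(model(epochs).shape)                     # (8, 2) class probabilities
```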
The confusion matrix illustrates the classification performance: the actual class labels are displayed vertically, whereas their predicted counterparts are shown horizontally. True Positives (TP) equal 770, the number of signals for which the CNN classifier correctly predicted that they contained ERP patterns (P300 label). True Negatives (TN) equal 1046, the number of signals correctly identified by the CNN classifier as not containing ERP patterns (NoneP300 label). False Negatives (FN) equal 112, the number of signals with ERP patterns that were incorrectly classified by the CNN classifier as non-ERP signals.
This part summarizes the outcomes achieved by the implemented approaches for skin sarcoma categorization, along with the datasets that were utilized and the performance measurements, and describes how the results were generated. Section 4.1 covers the performance measures that are utilized, and Section 4.2 presents the outcomes achieved by the suggested approaches.
Fig. 5. Train accuracy diagram
As shown in Eq. (3), the CNN classifier achieved an accuracy of 78.41%, defined as the percentage of signals whose predictions were correctly labeled (TP + TN). To provide a point of comparison, we also evaluated an alternative solution based on a long short-term memory (LSTM) network, which performed worse than the proposed deep learning CNN, at 69.69% accuracy (Acc).
Acc = (TP + TN) / (TP + TN + FP + FN)        (3)
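A small helper matching Eq. (3) is sketched below; the false-positive count is not reported in the text, so the value used in the example call is purely illustrative.

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy as in Eq. (3): correctly labeled signals over all signals."""
    return (tp + tn) / (tp + tn + fp + fn)

# TP, TN, and FN come from the reported confusion matrix; FP is assumed
# here only to make the example runnable.
print(round(accuracy(tp=770, tn=1046, fp=400, fn=112), 4))
```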
According to the findings, there were 4440 P300-labeled EEG signals and 22200 NoneP300-labeled EEG signals in the total dataset. Because of this class imbalance, the reported accuracy of 83.34% is misleading. The dataset was therefore down-sampled to eliminate redundant data that could distort the accuracy, thereby addressing the problem, and it was also split into two subsets for testing and development.
The CNN was the first proposed deep learning architecture; its training accuracy ranged from 65.34% to 88.89% across individual participants, with an overall training accuracy of 78.22%. On the validation set, the CNN classifier achieved an accuracy of 78.41%. The confusion matrix revealed that 770 signals were correctly predicted by the CNN classifier as containing ERP patterns, while 1046 signals were correctly predicted as not containing ERP patterns (NoneP300 label). A second deep learning architecture was also tried in the study; the LSTM network performed worse than the proposed CNN design, with an accuracy of 69.69%. Overall, the results show that deep learning architectures, and the CNN in particular, can successfully separate P300- and NoneP300-labeled EEG signals.
V. CONCLUSIONS
The study presents a CNN-based deep learning method for categorizing P300 signals from EEG data. It is important to balance the dataset, since the class imbalance between P300 and NoneP300 signals could bias the classifier. The proposed CNN architecture achieved an accuracy of 83.34%, while per-participant training accuracy ranged from 65.34% to 88.89%. The study also shows that the CNN achieves better results than the LSTM network. Participants' accurate performance greatly affects categorization accuracy; thus, users must be pre-oriented and adapted to the system. In conclusion, using the electrical changes caused by mental activity to create a control signal between the brain and a computer can be crucial for paralyzed patients who cannot speak or communicate. The proposed system analyzes the electrical EEG brain waves induced by mental activity and recognizes event-related potential (ERP) patterns to help sufferers communicate with others. We believe the proposed system met its goals of acceptable performance and higher accuracy, with a mean accuracy of 78.41% and a maximum accuracy of 96.30%. Thus, using CNNs to extract highly informative deep features from EEG signals can detect ERP patterns,
improving the proposed system's performance and accuracy.
The CNN-based strategy for classifying P300 signals from
EEG signals is promising for brain-computer interfaces and
cognitive neuroscience research.
REFERENCES
[1] Ashreetha, B., Devi, M.R., Kumar, U.P., Mani, M.K., Sahu, D.N. and
Reddy, P.C.S., 2022. Soft optimization techniques for automatic liver
cancer detection in abdominal liver images. International journal of
health sciences,6.
[2] Chillakuru, P., Madiajagan, M., Prashanth, K.V., Ambala, S., Shaker
Reddy, P.C. and Pavan, J., 2023. Enhancing wind power monitoring
through motion deblurring with modified GoogleNet algorithm. Soft
Computing, pp.1-11.
[3] Kumar, K., Pande, S.V., Kumar, T., Saini, P., Chaturvedi, A., Reddy,
P.C.S. and Shah, K.B., 2023. Intelligent controller design and fault
prediction using machine learning model. International Transactions
on Electrical Energy Systems,2023.
[4] Muthappa, K.A., Nisha, A.S.A., Shastri, R., Avasthi, V. and Reddy,
P.C.S., 2023. Design of high-speed, low-power non-volatile master
slave flip flop (NVMSFF) for memory registers designs. Applied
Nanoscience, pp. 1-10.
[5] Reddy, P.C.S., Pradeepa, M., Venkatakiran, S., Walia, R. and
Saravanan, M., 2021. Image and signal processing in the underwater
environment. J Nucl Ene Sci Power Generat Techno,10(9), p.2..
[6] Shaker Reddy, P.C. and Sucharitha, Y., 2023. A Design and Challenges
in Energy Optimizing CR-Wireless Sensor Networks. Recent
Advances in Computer Science and Communications (Formerly:
Recent Patents on Computer Science),16(5), pp.82-92.
[7] Reddy, P.C., Nachiyappan, S., Ramakrishna, V., Senthil, R. and Sajid
Anwer, M.D., 2021. Hybrid model using scrum methodology for
software development system. J Nucl Ene Sci Power Generat
Techno,10(9), p.2.
[8] Rao, K.R., Prasad, M.L., Kumar, G.R., Natchadalingam, R., Hussain,
M.M. and Reddy, P.C.S., 2023, August. Time-Series Cryptocurrency
Forecasting Using Ensemble Deep Learning. In 2023 International
Conference on Circuit Power and Computing Technologies
(ICCPCT) (pp. 1446-1451). IEEE.
[9] Sucharitha, Y., Reddy, P.C.S. and Suryanarayana, G., 2023. Network
Intrusion Detection of Drones Using Recurrent Neural
Networks. Drone Technology: Future Trends and Practical
Applications, pp.375-392.
[10] Patil, S.B., Rao, H.R., Chatrapathy, K., Kiran, A., Kumar, A.S. and
Reddy, P.C.S., 2023, August. Ensemble Deep Learning Framework for
Classification of Skin Lesions. In 2023 International Conference on
Circuit Power and Computing Technologies (ICCPCT) (pp. 1550-
1555). IEEE.
[11] Latha, S.B., Dastagiraiah, C., Kiran, A., Asif, S., Elangovan, D. and
Reddy, P.C.S., 2023, August. An Adaptive Machine Learning model
for Walmart sales prediction. In 2023 International Conference on
Circuit Power and Computing Technologies (ICCPCT) (pp. 988-992).
IEEE.
[12] Sucharitha, Y., Reddy, P.C.S. and Chitti, T.N., 2023, July. Deep
learning based framework for crop yield prediction. In AIP Conference
Proceedings (Vol. 2548, No. 1). AIP Publishing.
[13] Kumar, G.R., Reddy, R.V., Jayarathna, M., Pughazendi, N.,
Vidyullatha, S. and Reddy, P.C.S., 2023, May. Web application based
Diabetes prediction using Machine Learning. In 2023 International
Conference on Advances in Computing, Communication and Applied
Informatics (ACCAI) (pp. 1-7). IEEE.
[14] Yadala, S., Pundru, C.S.R. and Solanki, V.K., 2023, March. A Novel
Private Encryption Model in IoT Under Cloud Computing Domain.
In The International Conference on Intelligent Systems &
Networks (pp. 263-270). Singapore: Springer Nature Singapore.
[15] Shanmugaraja, P., Bhardwaj, M., Mehbodniya, A., VALI, S. and
Reddy, P.C.S., 2023. An Efficient Clustered M-path Sinkhole Attack
Detection (MSAD) Algorithm for Wireless Sensor Networks. Adhoc &
Sensor Wireless Networks,55.
[16] Reddy, P.C., Nachiyappan, S., Ramakrishna, V., Senthil, R. and Sajid
Anwer, M.D., 2021. Hybrid model using scrum methodology for
software development system. J Nucl Ene Sci Power Generat
Techno,10(9), p.2.
[17] Reddy, P.C.S., Pradeepa, M., Venkatakiran, S., Walia, R. and
Saravanan, M., 2021. Image and signal processing in the underwater
environment. J Nucl Ene Sci Power Generat Techno,10(9), p.2.
[18] Reddy, P.C.S., Sucharitha, Y. and Narayana, G.S., 2021. Forecasting
of Covid-19 Virus Spread Using Machine Learning
Algorithm. International Journal of Biology and Biomedicine,6.
[19] Ramana, A.V., Bhoga, U., Dhulipalla, R.K., Kiran, A., Chary, B.D. and
Reddy, P.C.S., 2023, June. Abnormal Behavior Prediction in Elderly
Persons Using Deep Learning. In 2023 International Conference on
Computer, Electronics & Electrical Engineering & their Applications
(IC2E3) (pp. 1-5). IEEE.
[20] Madhavi, G.B., Bhavani, A.D., Reddy, Y.S., Kiran, A., Chitra, N.T.
and Reddy, P.C.S., 2023, June. Traffic Congestion Detection from
Surveillance Videos using Deep Learning. In 2023 International
Conference on Computer, Electronics & Electrical Engineering &
their Applications (IC2E3) (pp. 1-5). IEEE.