
Citations

... A common preprocessing step is the application of denoising filters, which improve the overall signal quality [16]. The sensor modality under consideration has a significant impact on the choice of filter. For standard ML approaches, features are computed on segmented data using the time-series classification pipeline. ...
Article
Full-text available
There are numerous, interrelated, and multi‐dimensional aspects that influence a person's mental health, one of them being stress. Smart wearable technology with physiological and motion sensors has paved the way for real‐time data collection to deliver cutting‐edge information about the stress of individuals. It is now possible to build an Internet of Medical Things (IoMT) system that can recognise the user's stress, revealing the elements that cause it. However, there are significant gaps in existing systems for stress recognition. To begin with, stress recognition has primarily been studied for specific groups of people, such as occupational stress among office workers or hospital staff during emergency duties. Second, most past work on stress recognition has focused on extracting handcrafted features, which necessitates human intervention and expertise. To overcome the above-mentioned challenges, this work proposes a novel IoMT framework for continuous stress recognition for mental well‐being. This paper presents a hybrid deep learning (DL) approach for automatically retrieving features and classifying them into various stress states for the IoMT system. The proposed system gathers data from wearable physiological sensors and feeds it into a convolutional neural network–long short‐term memory (CNN‐LSTM) hybrid DL classifier. The suggested approach has been tested on the wearable stress and affect detection (WESAD) dataset. It reports an accuracy of 90.45%, which is higher than previously reported accuracies from existing machine learning (ML) and DL approaches.
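A hybrid CNN‐LSTM classifier of the kind described above can be sketched as follows. This is a minimal illustration only: the window length, channel count, number of stress classes, and layer sizes are assumptions, not the authors' configuration.

```python
# Minimal CNN-LSTM sketch for windowed physiological sensor data (illustrative sizes).
import tensorflow as tf

WINDOW_LEN = 700   # assumed samples per window (e.g. 1 s at 700 Hz)
N_CHANNELS = 8     # assumed number of physiological sensor channels
N_CLASSES = 3      # e.g. baseline / stress / amusement

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    # Convolutional block: local feature extraction from the raw signal windows
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # Recurrent block: temporal dependencies across the extracted features
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then proceed on segmented, per-window sensor arrays with integer stress labels, e.g. `model.fit(X_train, y_train, epochs=20)`.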
... Some researchers have developed classification models that operate directly on the raw data, so no feature extraction is required for classification [10][11][12][13]. The machine learning model is supported with human-centric assistance to complete the task. ...
... The bullae index method [42] was also examined for emphysema classification. The simulation results are also compared with other existing methods such as LBP [45], LRM [46], feature ensemble [47], SFS [48], CNN [49], LBP [50], fuzzy decision tree [51], CNN [52], JWRIULTP [53], and FFO+ELM [22] in terms of several positive or Type I measures (accuracy, sensitivity, specificity, precision, NPV, F1 score, and MCC) and negative or Type II measures (FPR, FNR, and FDR). ...
... The analysis of existing techniques using the same benchmark dataset is summarized below (reference, method, accuracy %):
[45], LBP, 83.59
Karabulut EM et al. [52], CNN, 84.25
Nava R et al. [47], feature ensemble, 91.07
Ibrahim MA et al. [48], SFS, 80.3
Peng L et al. [53], JWRIULTP, 82.14
Chhillar S et al. [46], LRM, 91.67
Pei X [49], CNN, 90.93
Sorensen L et al. [50], LBP, 95.2
Narayanan SJ et al. [51], fuzzy decision tree, 86.66
Isaac A et al. [22], FFO+ELM, 91.89
Proposed method, PO-FCRNN, 95.56 ...
Article
Full-text available
Emphysema is a lung disease that occurs due to abnormal alveolar expansion. This chronic disease causes difficulty in breathing and can lead to lung cancer. The progressive destruction caused by emphysema can be assessed by Computed Tomography (CT) scans and pulmonary function tests. The severity of the disease may extend to a life-threatening stage, emphasizing the need for early detection. Primary diagnosis can be done using spirometry and CT for early detection of the disease, reducing mortality rates. Difficulties associated with different diagnostic procedures and inter- and intra-observer variation have spurred growing research on computer-aided techniques. This paper intends to develop a computer-aided technique using an improved deep learning strategy. The initial step is image pre-processing, performed by histogram equalization and median filtering. Further, Fuzzy C-Means (FCM) clustering is used for segmentation. After segmentation, a new Adaptive Local Ternary Pattern (ALTP) is used for extracting the pattern descriptor, which is further utilized for classification. As a new contribution, the Parameter Optimized-Faster Region Convolutional Neural Network (PO-FRCNN) is developed for performing the diagnosis. The enhancement of pattern formation and deep classification is accomplished by the Improved Red Deer Algorithm (IRDA), which helps to tune the significant parameters that have a positive influence on accuracy. Benchmark and real-time datasets are used for the experimentation. The results show that the proposed method yields the best result and can effectively diagnose emphysema when compared to state-of-the-art techniques.
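The pre-processing and FCM segmentation steps named above can be illustrated with a short sketch. The file name, filter kernel size, number of clusters, and the small NumPy fuzzy C-means loop below are illustrative assumptions rather than the paper's exact pipeline.

```python
# Sketch: histogram equalization + median filtering, then fuzzy C-means on pixel intensities.
import cv2
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Cluster 1-D samples x (shape [N]) into n_clusters fuzzy clusters."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                   # memberships, columns sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # weighted cluster centres
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=0)
        if np.max(np.abs(u_new - u)) < eps:
            u = u_new
            break
        u = u_new
    return centers, u

ct_slice = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
equalized = cv2.equalizeHist(ct_slice)                       # histogram equalization
denoised = cv2.medianBlur(equalized, 5)                      # median filtering
pixels = denoised.reshape(-1).astype(np.float64)
centers, u = fuzzy_c_means(pixels, n_clusters=2)
labels = u.argmax(axis=0).reshape(denoised.shape)            # hard segmentation map
```

The resulting label map would then feed the ALTP descriptor and classifier stages described in the abstract.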
Chapter
Structures are susceptible to damage caused by various calamities, including earthquakes. A structural health monitoring (SHM) system provides warnings of such damage and indicates the degradation of a structure's life cycle. In this chapter, a database is first created for the G20+ model using SAP software. Accelerometer sensors are placed at each joint to detect the vibration signals of an earthquake. These vibration signals act as the raw data for signal processing and hybrid deep‐learning models. The chapter covers three different approaches to the classification task. The first approach trains 1D convolutional neural networks (CNNs) on the vibrational accelerometer data. The second approach trains long short‐term memory networks (LSTMs). The third approach involves 2D CNNs trained on spectrograms of the vibrational data used in the previous approaches. Comparing the results, 1D CNNs outperformed all the other networks used in the chapter.
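As a rough illustration of the third approach, a vibration record can be converted into a spectrogram "image" before being fed to a 2D CNN. The sampling rate, file name, and STFT parameters below are assumptions for the sketch.

```python
# Sketch: accelerometer signal -> log spectrogram suitable as 2D CNN input.
import numpy as np
from scipy.signal import spectrogram

FS = 100                                   # assumed accelerometer sampling rate (Hz)
signal = np.load("joint_accel.npy")        # hypothetical 1-D vibration record for one joint

f, t, Sxx = spectrogram(signal, fs=FS, nperseg=128, noverlap=64)
log_spec = np.log1p(Sxx)                                            # compress dynamic range
log_spec = (log_spec - log_spec.mean()) / (log_spec.std() + 1e-8)   # normalize
cnn_input = log_spec[np.newaxis, ..., np.newaxis]                   # (batch, freq, time, 1)
```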
Chapter
Full-text available
Remote patient monitoring (RPM) is widely used nowadays. RPM enables patients to be monitored in their own homes, at work, in transit, or even on vacation, and allows a doctor to follow a patient's condition easily from the hospital. RPM connects patients and clinicians in order to maintain continuous surveillance of patients. Wearable sensors placed on the patient's body monitor parameters such as heart rate, temperature, blood pressure, glucose level, and oxygen level. The wearable sensors are connected to an Android mobile phone via Bluetooth. Every patient should have an Android phone with internet access, which receives the collected data from all sensors through a Bluetooth interface unit. An Android application is provided to the patient at hospital registration, and the collected data is then sent to the hospital: the information is transmitted from the Android phone to a server in the hospital, where each patient's data is stored under a unique ID. The server is controlled by robotic process automation (RPA) using UiPath. The collected data is analysed by the RPA, which informs the doctor about the patient. The RPA is interfaced with the server and is used to automatically update patient data from the hospital server database to the doctor. If a patient is in a critical condition, the RPA automatically notifies the relevant doctor via mobile and automatically fixes an appointment between the patient and the doctor. The doctor can view the patient's condition through the Android mobile phone registered with the hospital under the unique ID. If the concerned doctor is not available, the RPA can fix the appointment with another doctor automatically. With the help of the server, the doctor can review the patient's history and start treatment.
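The patient-to-server data flow described above can be pictured with a minimal sketch. The endpoint URL, field names, and values are hypothetical; in the described system the upload is performed by the Android application, not a Python script.

```python
# Sketch: one sensor reading posted to a hospital server, keyed by the patient's unique ID.
import requests

HOSPITAL_API = "https://hospital.example.com/api/vitals"   # hypothetical endpoint

reading = {
    "patient_id": "P-0001",                 # unique ID assigned at registration
    "heart_rate_bpm": 92,
    "temperature_c": 37.8,
    "blood_pressure": {"systolic": 128, "diastolic": 84},
    "glucose_mg_dl": 110,
    "spo2_percent": 96,
    "timestamp": "2024-01-01T10:15:00Z",
}

resp = requests.post(HOSPITAL_API, json=reading, timeout=10)
resp.raise_for_status()   # server-side RPA then routes the stored record to the doctor
```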
Chapter
Artificial intelligence is the most contentious issue in diagnostic and therapeutic medical imaging research today. Numerous artificial intelligence (AI)-based architectures have been created to accomplish high-precision diagnosis. To expedite therapy, artificial intelligence software analyses X-rays, CT scans, MRIs, and other images for opacities and assists physicians in diagnosing and managing airway issues in the clinical context. Airway illnesses are the third most significant cause of mortality worldwide, affecting roughly 65 million people and claiming 3 million lives each year. Thus, this chapter examined how artificial intelligence can interpret medical images of various respiratory illnesses, including cystic fibrosis, emphysema, pneumoconiosis, pulmonary edema and embolism, asthma, and TB. The first section of this chapter focused on ways to enhance care for patients who have respiratory problems. In the next part, we looked at how artificial intelligence may identify and diagnose various airway diseases. Another section of the chapter discussed recently released studies examining researchers' efforts to analyze airway diseases using various machine learning and deep learning models. Finally, the chapter contained a comparative study based on the kind of airway disease diagnosed, the dataset utilized, and performance variables. Additionally, we addressed the evaluation and discussion of our results to convey any new information or insights gleaned from the chapter's conclusion.
Article
Full-text available
Physiotherapy exercises like extension, flexion, and rotation are an absolute necessity for patients undergoing post-stroke rehabilitation (PSR). A physiotherapist uses many techniques to restore the movements needed in daily life, including nerve re-education, task training, muscle strengthening, and various assistive techniques. However, having a physiotherapist guide a patient through physiotherapy exercises is a time-consuming, tedious, and costly affair. In this paper, a novel automated system is designed for detecting and recognizing upper-limb exercises using an RGB-Depth camera, which can guide patients through real-time physiotherapy exercises without human intervention. Hybrid deep learning (HDL) approaches are exploited to build a highly accurate and robust system for recognizing upper-limb physiotherapy exercises for PSR. As a baseline, a deep convolutional neural network (CNN) is designed that automatically extracts features from the pre-processed data and classifies the performed physiotherapy exercise. To extract and utilize temporal dependencies as the exercise is being performed, recurrent neural network (RNN) architectures are used. In the CNN-LSTM model, the CNN derives useful features that are fed to the LSTM, thus increasing the accuracy of the recognized exercises. To train faster, another hybrid deep learning model, CNN-GRU, is implemented, where a novel focal loss criterion is used to overcome the drawbacks of the standard cross-entropy loss. Experimental evaluation is done using RGB-D data obtained from
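A standard multi-class focal loss, of the general family referenced above, can be sketched as follows; the gamma and alpha values are illustrative and may differ from the paper's formulation.

```python
# Sketch: multi-class focal loss, FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).
import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25):
    """Return a Keras-compatible loss for integer labels and softmax outputs."""
    def loss_fn(y_true, y_pred):
        y_true = tf.reshape(tf.cast(y_true, tf.int32), [-1])
        y_onehot = tf.one_hot(y_true, depth=tf.shape(y_pred)[-1])
        p_t = tf.reduce_sum(y_onehot * y_pred, axis=-1)     # probability of the true class
        p_t = tf.clip_by_value(p_t, 1e-7, 1.0)
        return tf.reduce_mean(-alpha * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
    return loss_fn

# Usage with a hybrid model, e.g.:
# model.compile(optimizer="adam", loss=focal_loss(), metrics=["accuracy"])
```

The down-weighting factor (1 - p_t)^gamma reduces the contribution of well-classified samples, which is the stated motivation for replacing plain cross-entropy.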
Article
A Computer-Aided Diagnosis (CAD) system to assist a radiologist in diagnosing pulmonary emphysema from chest Computed Tomography (CT) slices has been developed. The lung tissues are segmented from the chest CT slices using the Spatial Fuzzy C-Means (SFCM) clustering algorithm, and the Regions of Interest (ROIs) are extracted using pixel-based segmentation. The ROIs considered for this work are pulmonary emphysematous lesions, namely centrilobular emphysema, paraseptal emphysema, and sub-pleural bullae. The extracted ROIs are then validated and labelled by an expert radiologist. From each ROI, features with respect to shape, texture, and run-length are extracted. A competitive coevolution model is proposed for Feature Selection (FS). The model makes use of two bio-inspired algorithms, namely the Spider Monkey Optimization (SMO) algorithm and the Paddy Field Algorithm (PFA), as its building blocks. FS is performed as a wrapper approach, using the bio-inspired SMO and PFA algorithms with the accuracy of a Support Vector Machine (SVM) classifier as the fitness function. A ten-fold cross-validation technique is used to train the SVM classifier on the selected features. The model is tested using two datasets: a real-time emphysema dataset and the CT emphysema database (CTED) dataset. The accuracy, precision, recall, and specificity obtained for the two datasets are (81.95%, 93.74%), (72.92%, 90.61%), (72.92%, 90.61%), and (86.46%, 95.3%), which are better than the performance of the SMO and PFA algorithms applied individually for FS and of the CAD system without FS.
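The wrapper-style feature selection described here, with classifier accuracy as the fitness function, can be sketched generically. A simple random search stands in below for the paper's SMO and PFA optimizers; the data arrays are assumed inputs.

```python
# Sketch: wrapper feature selection with 10-fold cross-validated SVM accuracy as fitness.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X, y):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=10).mean()

def random_search_fs(X, y, n_iter=50, seed=0):
    """Random search over binary feature masks (stand-in for SMO / PFA)."""
    rng = np.random.default_rng(seed)
    best_mask, best_fit = None, -np.inf
    for _ in range(n_iter):
        mask = rng.random(X.shape[1]) < 0.5          # candidate feature subset
        fit = fitness(mask, X, y)
        if fit > best_fit:
            best_mask, best_fit = mask, fit
    return best_mask, best_fit
```

Replacing the random proposal step with a population-based optimizer gives the structure of the coevolutionary scheme the paper describes.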
Article
Full-text available
Structural damage detection remains a challenging problem due to the complicated non-linear behaviour of the structural system, incomplete sensed data, the presence of noise in the data, and uncertainties in both experimental measurements and the analytical model. This paper presents the application of a non-linear signal processing tool and an artificial intelligence-based methodology for nonparametric damage detection to address the above-stated issues. Local mean decomposition (LMD), an adaptive signal processing technique, is exploited to extract multidimensional damage features from the acquired non-linear, non-stationary vibration signals. These features are classified into categories, which are then utilized to calculate a damage indicator. Classification of a multidimensional feature space has been a challenging issue since its inception. To address this, local gravitation clustering (LGC), a self-evaluating, synergic clustering technique, is employed. The relevance and significance of the process to the problem have also been an important concern. The outcomes of the whole process demonstrate its proficiency in damage identification. The efficiency of the process is then compared with existing clustering methods on several parameters. The proposed algorithm is also validated for operational and environmental conditions by considering finite cases analogous to physical factors such as temperature, ageing, and live loads.
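A very rough sketch of the feature-clustering and damage-indicator step is given below. LGC is not available in standard libraries, so K-means stands in for it, and the indicator (mean distance of test features from a healthy baseline centre) is an assumed, simplified formulation, not the paper's definition.

```python
# Sketch: cluster baseline features, then score new features against the healthy centre.
import numpy as np
from sklearn.cluster import KMeans

baseline_features = np.load("features_healthy.npy")   # hypothetical LMD-derived features
test_features = np.load("features_unknown.npy")       # features from the structure under test

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(baseline_features)
centre = km.cluster_centers_.mean(axis=0)              # reference (healthy) centre

baseline_dist = np.linalg.norm(baseline_features - centre, axis=1).mean()
test_dist = np.linalg.norm(test_features - centre, axis=1).mean()
damage_indicator = test_dist / (baseline_dist + 1e-12)  # values well above 1 suggest damage
```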
Article
This paper estimates the potential of a deep learning method for the automatic diagnosis of pulmonary emphysema. In the initial step, data acquisition is performed by gathering a real-time dataset and the publicly available benchmark dataset known as the Computed Tomography Emphysema Database. After pre-processing of the images, lung segmentation is performed by optimized binary thresholding. Here, the segmentation is improved by adopting a hybrid meta-heuristic algorithm combining Barnacles Mating Optimization (BMO) and the Butterfly Optimization Algorithm (BOA), called the Barnacles Mating-based Butterfly Optimization Algorithm (BM-BOA), so as to attain a multi-objective function concerning the variance and entropy of the image. Further, a feature descriptor called the Weber Local Binary Pattern (WLBP) is used for generating the pattern image and the feature vectors. Two types of machine learning algorithms are used for classification: a Neural Network (NN) takes the feature vector from WLBP as input, and a deep learning model, a Convolutional Neural Network (CNN), takes the WLBP pattern of the segmented image as input. In the hybrid classification model, the activation function is optimized by the same BM-BOA, which results in classifying normal lung, mild emphysema, moderate (medium) emphysema, and severe emphysema. According to the experimental results, with comparison against state-of-the-art techniques, the proposed system permits inexpensive and reliable identification of emphysema on digital chest radiography.
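The thresholding and texture-descriptor stages can be illustrated with a short sketch. Plain LBP from scikit-image stands in for the paper's WLBP, and Otsu thresholding stands in for the BM-BOA optimized threshold; the file name and parameters are assumptions.

```python
# Sketch: binary thresholding for lung segmentation + LBP texture histogram as feature vector.
import numpy as np
from skimage import io, feature, filters

ct_slice = io.imread("ct_slice.png", as_gray=True)      # hypothetical input CT slice
threshold = filters.threshold_otsu(ct_slice)            # stand-in for the optimized threshold
lung_mask = ct_slice < threshold                         # low-attenuation (lung) regions

lbp = feature.local_binary_pattern(ct_slice, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp[lung_mask], bins=np.arange(0, 11), density=True)
# 'hist' plays the role of the feature vector fed to the NN branch; the CNN branch
# would instead take the 2-D pattern image restricted to the lung mask as input.
```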