Article

A reinforcement federated learning based strategy for urinary disease dataset processing


Abstract

Urinary disease is a complex healthcare issue that continues to grow in prevalence. Urine tests have proven valuable in identifying conditions such as kidney disease and urinary tract infections, as well as causes of lower abdominal pain. While machine learning has made significant strides in automating urinary tract infection detection, the accuracy of existing methods is hindered by concerns surrounding data privacy and the time-intensive nature of training and testing with large datasets. Our proposed method aims to address these limitations and achieve highly accurate urinary tract infection detection across various healthcare laboratories, while simultaneously minimizing data security risks and processing delays. To tackle this challenge, we approach the problem as a combinatorial optimization task. We optimize the accuracy objective as a concave function and minimize computation delay as a convex function. Our work introduces a framework enabled by federated learning and reinforcement learning strategy (FLRLS), leveraging lab urine data. FLRLS employs deterministic agents to optimize the exploration and exploitation of urinary data, while the actual determination of urinary tract infections is performed at a centralized, aggregated node. Experimental results demonstrate that our proposed method improves accuracy by 5% and reduces total delay. By combining federated learning, reinforcement learning, and a combinatorial optimization approach, we achieve both high accuracy and minimal delay in urinary tract infection detection.
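The concave-accuracy/convex-delay trade-off described in the abstract can be sketched as a small combinatorial search. The functional forms, the coefficients, and the choice of "number of participating labs" as the decision variable below are illustrative assumptions, not the paper's actual objective.

```python
import math

# Illustrative sketch of the trade-off above: accuracy modelled as a concave
# function of the number of participating labs, delay as a convex one.
# Functional forms, coefficients, and the decision variable are assumptions.
def accuracy(k):
    return 0.80 + 0.05 * math.sqrt(k)   # concave: diminishing returns per lab

def delay(k):
    return 0.002 * k ** 2               # convex: coordination cost grows fast

def best_lab_count(max_labs, weight=0.5):
    # Enumerate the small combinatorial space and pick the best trade-off.
    return max(range(1, max_labs + 1),
               key=lambda k: accuracy(k) - weight * delay(k))

print(best_lab_count(20))  # -> 5 under these toy coefficients
```

Because one term is concave and the other convex, the scalarized objective peaks at an interior number of labs: adding sites first helps accuracy, then the delay penalty dominates.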


... This study presented federated learning in which separately trained models are deployed at different layers and offloaded to the aggregated node. Reinforcement learning methods [12]-[20] such as A3C, SARSA, DDPG, and Q-learning are widely implemented to achieve different objectives such as energy, time, prediction, delay, and execution. Convolutional neural network (CNN) based models (ResNet and LeNet with the ReLU activation function) [21], [22] are presented to extract the features of kidney images based on their objectives in the system. ...
... Offload the locally trained and tested model based on equation (4); train all local nodes based on equation (5); update the federated internal and external states; determine the decision-tree Gini impurity based on equation (13); determine the decision-tree entropy based on equation (14); end internal states; end external states. ...
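The decision-tree criteria cited here as equations (13) and (14) are, in their standard textbook form, Gini impurity and Shannon entropy; a minimal sketch (the label names are hypothetical toy data):

```python
from math import log2

# Standard Gini impurity and Shannon entropy, the textbook forms of the
# decision-tree criteria cited as equations (13) and (14); labels are toy data.
def gini_impurity(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

sample = ["uti", "uti", "healthy", "healthy"]
print(gini_impurity(sample))  # 0.5: maximally mixed binary split
print(entropy(sample))        # 1.0 bit
```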
... The ANOVA method evaluates the execution of kidney images based on total delays, as shown in equation (7). In this part, the study implemented different federated and reinforcement learning schemes, e.g., the federated learning policy (FLP) and the reinforcement federated learning scheme (RFLS) [1], [3], [5], [7], [9], [11], [12], [15], [18]. The study suggested the AFRLS algorithm to solve the kidney disease problem. ...
Article
Full-text available
The number of people with kidney disease rises every day for many reasons. Many existing machine-learning-enabled mechanisms for processing kidney disease suffer from long delays and consume considerable resources during processing. In this paper, the study shows how federated and reinforcement learning schemes can be combined to develop a delay-optimal scheme that optimizes both the internal and external states of reinforcement learning in a federated learning fog cloud network. This work presents the Adaptive Federated Reinforcement Learning-Enabled System (AFRLS) for processing kidney disease images from Internet of Things (IoT) consumers. The relationship between IoT consumers and kidney images is that the data are collected from different IoT consumer sources, such as ultrasound and X-ray machines in healthcare clinics. In healthcare applications, AFRLS reduces the time needed to preprocess federated learning datasets for training and testing and to run them on different fog and cloud nodes. AFRLS decides the scheduling on the nodes and improves the constraints based on a decision tree. Based on the simulation results, AFRLS reduces task delay compared with other machine learning methods used in fog cloud networks: it improved the delay among nodes by 55%, the delay among internal states by 40%, and the training and testing delay by 51%.
... The development of meta-heuristic algorithms is aimed at overcoming numerous challenges encountered across a range of domains. These find applications in network security [24,25], economic emission [26,27], energy optimization [28], parameter optimization [29-31], medical diagnosis [32-34], fault identification [35], privacy protection [36], and vehicle communication [37]. RUN is an algorithm based on the mathematical concept of the Runge-Kutta method, which leverages the slope-change logic computed by the Runge-Kutta method as a global optimization search mechanism. ...
Article
Full-text available
Intrusion detection systems (IDS) classify network traffic as either threatening or normal based on data features, aiming to identify malicious activities attempting to compromise computer systems. However, the volume of intrusion-related data is increasing daily, and redundant features within this data hinder the improvement of IDS classification performance and efficiency. This study introduces a wrapper feature selection model, denoted bICSRUN-KNN, with Runge-Kutta optimization for information-guided communication (ICSRUN) to detect system intrusions. Comparative experiments on the IEEE CEC 2014 benchmark functions demonstrate ICSRUN's superiority over other algorithms. Subsequently, comparative experiments are conducted using 12 UCI datasets, NSL-KDD, ISCX-URL-2016, ISCX-Tor-NonTor-2017, and the LUFlow Network dataset against competing algorithms. Experimental results demonstrate that the bICSRUN-KNN model achieved remarkable accuracy rates of 98.705% and 98.341% in the binary and multiclass settings of NSL-KDD. For ISCX-URL-2016, ISCX-Tor-NonTor-2017, and LUFlow Network, accuracy rates of 96.107%, 99.772%, and 88.748% are respectively attained.
... Furthermore, with the increasing value of data in recent years, concerns about user data, especially patient data privacy, have attracted attention. For instance, ref. [155] considers data privacy in smart healthcare and uses a federated reinforcement learning architecture for data security risk reduction and knowledge sharing at the training-strategy layer. This indicates that even when algorithms are updated at different stages of research, or different considerations apply at certain levels, designing reinforcement learning solutions through an integrated framework does not require starting from scratch. ...
Article
Full-text available
Extensive research has been carried out on reinforcement learning methods. The core idea of reinforcement learning is to learn by trial and error, and it has been successfully applied to robotics, autonomous driving, gaming, healthcare, resource management, and other fields. However, when building reinforcement learning solutions at the edge, there are not only the challenges of data hunger and insufficient computational resources but also the difficulty of a single reinforcement learning method meeting the model's requirements for efficiency, generalization, robustness, and so on. Such solutions rely on expert knowledge for the design of edge-side integrated reinforcement learning methods, and they lack a high-level system architecture design to support their wider generalization and application. Therefore, in this paper, instead of surveying reinforcement learning systems, we survey the most commonly used options for each part of the architecture from the point of view of integrated application. We present the characteristics of traditional reinforcement learning in several aspects and design a corresponding integration framework based on them. In this process, we provide a complete primer on the design of reinforcement learning architectures while also demonstrating the flexibility of the various parts of the architecture to be adapted to the characteristics of different edge tasks. Overall, reinforcement learning has become an important tool in intelligent decision making, but it still faces many challenges in practical application in edge computing. The aim of this paper is to provide researchers and practitioners with a new, integrated perspective to better understand and apply reinforcement learning in edge decision-making tasks.
... This study [22] presents a game-theory-based security model for FL. The suggested model, known as NVAS, is an FL aggregation system that offers a thorough plan for developing a COVID-19 detection and prevention system integrating game theory, wireless communication, and AI. The paper [23] presents a novel framework called the "Federated Learning and Reinforcement Learning Strategy (FLRLS)" that utilizes lab urine data for detecting urinary tract infections (UTIs). The model is based on reinforcement learning used in FL settings. ...
Article
Full-text available
In the contemporary landscape, machine learning has a pervasive impact across virtually all industries. However, the success of these systems hinges on the accessibility of training data. In today's world, every device generates data, which can serve as the building blocks for future technologies. Conventional machine learning methods rely on centralized data for training, but the availability of sufficient and valid data is often hindered by privacy concerns. Data privacy is the main concern when developing a healthcare system. One technique that allows decentralized learning is Federated Learning. Researchers have been actively applying this approach in various domains and have received a positive response. This paper underscores the significance of employing Federated Learning in the healthcare sector, emphasizing the wealth of data present in hospitals and electronic health records that could be used to train medical systems.
Article
Full-text available
In modern healthcare, integrating Artificial Intelligence (AI) with the Internet of Medical Things (IoMT) is highly beneficial and has made it possible to effectively control disease using networks of interconnected sensors worn by individuals. The purpose of this work is to develop an AI-IoMT framework for identifying several chronic diseases from patients' medical records. To that end, the Deep Auto-Optimized Collaborative Learning (DACL) Model, a new AI-IoMT framework, has been developed for rapid diagnosis of chronic diseases such as heart disease, diabetes, and stroke. A Deep Auto-Encoder Model (DAEM) is used in the proposed framework to impute and preprocess the data by determining the fields of characteristics or information that are missing. To speed up classification training and testing, the Golden Flower Search (GFS) approach is then utilized to choose the best features from the imputed data. In addition, the Collaborative Bias Integrated GAN (ColBGaN) model has been created for precisely recognizing and classifying the types of chronic diseases from patients' medical records. The loss function is optimally estimated during classification using the Water Drop Optimization (WDO) technique, reducing the classifier's error rate. Using several well-known benchmarking datasets and performance measures, the proposed DACL's effectiveness and efficiency in identifying diseases is evaluated and compared.
Article
Full-text available
With the wider availability of healthcare data such as Electronic Health Records (EHR), more and more data-driven based approaches have been proposed to improve the quality-of-care delivery. Predictive modeling, which aims at building computational models for predicting clinical risk, is a popular research topic in healthcare analytics. However, concerns about privacy of healthcare data may hinder the development of effective predictive models that are generalizable because this often requires rich diverse data from multiple clinical institutions. Recently, federated learning (FL) has demonstrated promise in addressing this concern. However, data heterogeneity from different local participating sites may affect prediction performance of federated models. Due to acute kidney injury (AKI) and sepsis' high prevalence among patients admitted to intensive care units (ICU), the early prediction of these conditions based on AI is an important topic in critical care medicine. In this study, we take AKI and sepsis onset risk prediction in ICU as two examples to explore the impact of data heterogeneity in the FL framework as well as compare performances across frameworks. We built predictive models based on local, pooled, and FL frameworks using EHR data across multiple hospitals. The local framework only used data from each site itself. The pooled framework combined data from all sites. In the FL framework, each local site did not have access to other sites' data. A model was updated locally, and its parameters were shared to a central aggregator, which was used to update the federated model's parameters and then subsequently, shared with each site. We found models built within a FL framework outperformed local counterparts. Then, we analyzed variable importance discrepancies across sites and frameworks. Finally, we explored potential sources of the heterogeneity within the EHR data. The different distributions of demographic profiles, medication use, and site information contributed to data heterogeneity.
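The local-update/central-aggregation loop described above can be sketched in a few lines of FedAvg-style Python. The 1-D linear model y = w*x and the toy per-site data are illustrative assumptions, not this study's models or data.

```python
# FedAvg-style sketch of the loop described above: each site takes a local
# step on its own data, and the aggregator averages parameters by site size.
# The 1-D linear model y = w*x and the toy data are illustrative assumptions.
def local_update(w, data, lr=0.1):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad                      # one local gradient step

def aggregate(site_weights, site_sizes):
    total = sum(site_sizes)                   # weight sites by data volume
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

sites = [[(1.0, 2.1), (2.0, 3.9)], [(1.5, 3.0)]]   # raw data never leaves a site
w_global = 0.0
for _ in range(50):                                # communication rounds
    local_ws = [local_update(w_global, d) for d in sites]
    w_global = aggregate(local_ws, [len(d) for d in sites])
print(round(w_global, 2))  # -> 1.99, matching the pooled least-squares slope
```

With a single local gradient step per round, size-weighted averaging reproduces pooled gradient descent exactly, which is one intuition for why federated models can approach pooled performance without sharing raw data.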
Article
Full-text available
The use of artificial intelligence (AI) technology in dentistry provides information that aids clinical decision-making by interpreting big data quickly. This study aims to systematically review the current role of AI in dentistry where it has a significant impact on clinical dentistry. Document collection was done from 1990 to 2022 based on the main themes of AI-assisted dentistry. This document extraction was done by utilizing PubMed, Embase, CINAHL, and Google Scholar libraries with different Medical Subject Headings (MeSH). This search result revealed different numbers of publications under the search terms AI in dentistry (N=1289), AI role in the diagnosis of dental caries (N=4), AI in dental diagnostic and treatment planning (N=68), AI and dental caries (N=76), the future of dentistry with AI (N=5), and Machine learning in dentistry (N=668). A fast-emerging technology like AI can certainly replace manual dexterity in dentistry. To reduce errors and oversight, these technologies must also be used with caution and under human supervision. The fastest and most accurate diagnosis of oral diseases leads to better outcomes for patients.
Article
Full-text available
Federated Learning (FL), Artificial Intelligence (AI), and Explainable Artificial Intelligence (XAI) are among the most trending and exciting technologies in the intelligent healthcare field. Traditionally, the healthcare system works on the basis of centralized agents sharing their raw data, so substantial vulnerabilities and challenges still exist in this system. When integrated with AI, however, the system becomes a set of collaborating agents capable of communicating efficiently with their desired hosts. FL is another interesting approach, which works in a decentralized manner; it maintains communication based on a model in the preferred system without transferring the raw data. The combination of FL, AI, and XAI techniques can minimize several limitations and challenges in the healthcare system. This paper presents a complete analysis of FL using AI for smart healthcare applications. Initially, we discuss contemporary concepts of emerging technologies such as FL, AI, XAI, and the healthcare system. We integrate and classify FL-AI with healthcare technologies in different domains. Further, we address the existing problems, including security, privacy, stability, and reliability, in the healthcare field. In addition, we guide readers toward solution strategies for healthcare using FL and AI. Finally, we discuss extensive research areas as well as future potential prospects regarding FL-based AI research in the healthcare management system.
Article
Full-text available
The wearable lower limb exoskeleton is a typical human-in-loop human–robot coupled system, which conducts natural and close cooperation with the human by recognizing human locomotion timely. Requiring subject-specific training is the main challenge of the existing approaches, and most methods have the problem of insufficient recognition. This paper proposes an integral subject-adaptive real-time Locomotion Mode Recognition (LMR) method based on GA-CNN for a lower limb exoskeleton system. The LMR method is a combination of Convolutional Neural Networks (CNN) and Genetic Algorithm (GA)-based multi-sensor information selection. To improve network performance, the hyper-parameters are optimized by Bayesian optimization. An exoskeleton prototype system with multi-type sensors and novel sensing-shoes is used to verify the proposed method. Twelve locomotion modes, which composed an integral locomotion system for the daily application of the exoskeleton, can be recognized by the proposed method. According to a series of experiments, the recognizer shows strong comprehensive abilities including high accuracy, low delay, and sufficient adaption to different subjects.
Article
Full-text available
This paper focuses on Coronavirus Disease 2019 (COVID-19) X-ray image segmentation technology. We present a new multilevel image segmentation method based on a swarm intelligence algorithm (SIA) to enhance the segmentation of COVID-19 X-rays. The paper introduces XMACO, an improved ant colony optimization algorithm with directional crossover (DX) and directional mutation (DM) strategies. The DX strategy improves the quality of the population search, which enhances the convergence speed of the algorithm, while the DM strategy increases the diversity of the population to jump out of local optima (LO). Furthermore, we design an image segmentation model (MIS-XMACO) by incorporating two-dimensional (2D) histograms, 2D Kapur's entropy, and a nonlocal means strategy, and we apply this model to COVID-19 X-ray image segmentation. Benchmark function experiments based on the IEEE CEC2014 and IEEE CEC2017 function sets demonstrate that XMACO has a faster convergence speed and higher convergence accuracy than competing models and can avoid falling into LO. Other SIAs and image segmentation models were used to ensure the validity of the experiments. The proposed MIS-XMACO model shows more stable and superior segmentation results than other models at different threshold levels, as shown by the analysis of the experimental results.
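Kapur's-entropy thresholding, which this line of work extends to 2-D histograms and multiple thresholds, can be illustrated in its basic 1-D single-threshold form. The 8-bin bimodal histogram below is an illustrative assumption, not data from the paper.

```python
from math import log

# 1-D, single-threshold Kapur's entropy (the paper's method extends this idea
# to 2-D histograms and multiple thresholds); the 8-bin histogram is a toy.
def kapur_entropy(hist, t):
    total = sum(hist)
    probs = [h / total for h in hist]
    def class_entropy(ps):
        w = sum(ps)
        if w == 0:
            return 0.0
        return sum(-(p / w) * log(p / w) for p in ps if p > 0)
    return class_entropy(probs[:t]) + class_entropy(probs[t:])

def best_threshold(hist):
    # Exhaustive search over thresholds; metaheuristics replace this at scale,
    # where the multi-threshold search space grows combinatorially.
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))

hist = [40, 35, 5, 2, 3, 30, 45, 40]   # two intensity modes with a valley
print(best_threshold(hist))            # picks the split at the valley
```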
Article
Full-text available
Present-day intelligent healthcare applications offer digital healthcare services to users in a distributed manner. The Internet of Healthcare Things (IoHT) is the mechanism of the Internet of Things (IoT) found in different healthcare applications, with devices that are attached to external fog cloud networks. Connecting to cloud computing through different mobile applications, the applications of the IoHT include remote healthcare monitoring systems, high blood pressure monitoring, online medical counseling, and others. These applications are designed on a client-server architecture based on various standards such as the common object request broker architecture (CORBA), service-oriented architecture (SOA), remote method invocation (RMI), and others. However, these applications do not directly support the many healthcare nodes and blockchain technology in the current standard. Thus, this study devises a potent blockchain-enabled socket RPC IoHT framework for medical enterprises (e.g., healthcare applications). The goal is to minimize service costs, blockchain security costs, and data storage costs in distributed mobile cloud networks. Simulation results show that the proposed blockchain-enabled socket RPC minimized the service cost by 40%, the blockchain cost by 49%, and the storage cost by 23% for healthcare applications.
Article
Full-text available
COVID-19 is currently raging worldwide, with more patients being diagnosed every day. It is usually diagnosed by examining pathological photographs of the patient's lungs. Chest radiographs carry a great deal of detailed and essential information, but manual processing is neither efficient nor accurate enough. As a result, efficiently analyzing and processing chest radiographs of COVID-19 patients is an important research direction for promoting COVID-19 diagnosis. To improve the processing efficiency of COVID-19 chest films, a multilevel thresholding image segmentation (MTIS) method based on an enhanced multiverse optimizer (CCMVO) is proposed. CCMVO improves on the original Multi-Verse Optimizer by introducing horizontal and vertical search mechanisms; it has a stronger global search ability and can escape local optima during optimization. The CCMVO-based MTIS method obtains higher-quality segmentation results than HHO, SCA, and other methods and is less prone to stagnation during the segmentation process. To verify the performance of the proposed CCMVO algorithm, CCMVO is first compared with DE, MVO, and other algorithms on 30 benchmark functions; the proposed CCMVO is then applied to image segmentation of COVID-19 chest radiographs; finally, this paper verifies that the combination of MTIS and CCMVO yields good segmentation results using the Feature Similarity Index (FSIM), the Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). Therefore, this research can provide an effective segmentation method for medical organizations processing COVID-19 chest radiographs and thus help doctors diagnose coronavirus pneumonia (COVID-19).
Article
Full-text available
Urinary tract infections (UTIs) are among the most common infections occurring across all age groups. UTIs are a well-known cause of acute morbidity and chronic medical conditions. The current diagnostic methods for UTIs remain sub-optimal. The development of better diagnostic tools for UTIs is essential for improving treatment and reducing morbidity. Artificial intelligence (AI) is defined as the science of making computers able to perform tasks commonly associated with intelligent beings. The objective of this study was to analyze current views regarding attempts to apply artificial intelligence techniques in everyday practice, as well as to find promising methods for diagnosing urinary tract infections in the most efficient ways. We included six research works comparing various AI models for predicting UTI. The literature examined here confirms the relevance of AI models to UTI diagnosis, though it has not yet been established which model is preferable for infection prediction in adult patients. AI models achieve high performance in retrospective studies, but further studies are required.
Article
Full-text available
The present study employs density functional theory-based first-principles calculations to investigate the electron transport properties of polyaniline following exposure to acidic and alkaline pH. The in-situ deposited polyaniline-based paper device maintains its emeraldine salt form while exposed to acidic pH and converts to the emeraldine base when subjected to alkaline pH solutions. These structural changes at acidic and alkaline pH are validated experimentally by Raman spectra. Furthermore, the Raman spectra computed from density functional theory are validated against the experimental spectra. The changes in the theoretical energy band gap of polyaniline obtained from first-principles calculations were correlated with the changes in the experimental impedimetric response of the sensor after exposure to acidic and alkaline solutions. Finally, the impedimetric responses were used to predict urine pH through a machine-learning-based smart and interactive web application. Different machine-learning-based regression models were implemented to acquire the best possible outcome. A Gradient Boosting Regressor with least-squares loss was selected as it showed the lowest mean squared, mean absolute, and root mean squared errors among the models. The smart sensing platform successfully predicts the unknown pH of urine samples with an average accuracy of more than 98%. The locally deployed smart web app can be accessed within a local area network by the end-user, which holds promise for effective detection of urinary pH.
Article
Full-text available
Federated learning has emerged as a promising, massively distributed way to train a joint deep model over large numbers of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against the state of the art.
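The min-norm subproblem underlying multiple-gradient descent, the classic building block that methods in this family extend, has a closed form for two participants. This two-gradient sketch is illustrative only and omits FedMGDA+'s own constraints and normalization.

```python
# Closed-form min-norm direction for two objectives (classic MGDA step);
# illustrative only: FedMGDA+'s clipping and normalization are omitted.
def common_descent(g1, g2):
    diff = [b - a for a, b in zip(g1, g2)]            # g2 - g1
    denom = sum(d * d for d in diff)
    if denom == 0:
        return list(g1)                               # gradients already agree
    gamma = sum(d * b for d, b in zip(diff, g2)) / denom
    gamma = min(1.0, max(0.0, gamma))                 # keep a convex combination
    return [gamma * a + (1.0 - gamma) * b for a, b in zip(g1, g2)]

# Two users with orthogonal gradients: the common direction treats them equally.
d = common_descent([1.0, 0.0], [0.0, 1.0])
print(d)  # -> [0.5, 0.5]
```

The returned direction has a non-negative inner product with both users' gradients, so a small step against it improves (or at least does not worsen) every participant, which is the "no user sacrificed" property in miniature.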
Article
Full-text available
Portfolio optimization is about building an investment decision on a set of candidate assets with finite capital. Generally, investors should devise a rational compromise between return and risk for their investments; therefore, portfolio optimization can be cast as a biobjective problem. In this work, both the expected return and the conditional value-at-risk (CVaR) are considered as optimization objectives. Although the CVaR objective can be optimized with existing techniques such as linear programming, the involvement of practical constraints poses challenges to exact mathematical methods. Hence, we propose a new algorithm named F-MOEA/D, which is based on a Pareto front evolution strategy and the decomposition-based multiobjective evolutionary algorithm. This strategy involves two major components, i.e., constructing local Pareto fronts through exact methods and picking the best one via decomposition approaches. The empirical study shows F-MOEA/D obtains better approximations of the test instances than several alternative multiobjective evolutionary algorithms under the same time budget. Meanwhile, on two large instances with 7964 and 9090 assets, F-MOEA/D still performs well, given that a multiobjective mathematical method does not finish within 7 days.
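CVaR, one of the two objectives above, has a simple empirical estimator: the mean of the worst (1 - alpha) fraction of sampled losses. A sketch, with illustrative sample losses rather than the paper's instances:

```python
# Empirical CVaR: the mean loss in the worst (1 - alpha) tail of the sample.
# The sample losses below are illustrative, not from the paper's instances.
def cvar(losses, alpha=0.95):
    tail = sorted(losses, reverse=True)          # largest losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(tail[:k]) / k

losses = [-0.02, 0.01, 0.03, -0.01, 0.10, 0.04, -0.03, 0.02, 0.00, 0.05] * 10
print(cvar(losses, alpha=0.95))  # mean of the five largest losses (all 0.10 here)
```

Unlike variance, CVaR looks only at the tail, which is why it pairs naturally with expected return as a second, conflicting objective.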
Article
Full-text available
Mobile-cloud-based healthcare applications are increasingly common in practice. For instance, healthcare, transport, and shopping applications are designed on the basis of the mobile cloud. For executing mobile-cloud applications, offloading and scheduling are fundamental mechanisms. However, mobile healthcare workflow applications are widely ignored by these methods, despite demand in areas such as healthcare monitoring, live healthcare services, and biomedical firms, and existing offloading and scheduling schemes do not consider workflow applications' execution in their models. This paper develops a lightweight secure efficient offloading scheduling (LSEOS) metaheuristic model. LSEOS consists of lightweight, secure offloading and scheduling methods whose execution offloading delay is less than that of existing methods. The objective of LSEOS is to run workflow applications on other nodes and minimize the delay and security risk in the system. The LSEOS metaheuristic consists of the following components: adaptive deadlines, sorting, and scheduling with neighborhood search schemes. Compared to current strategies for delay and security validation, computational results revealed that LSEOS outperformed all available offloading and scheduling methods for process applications by a 10% security ratio and by 29% regarding delays. Keywords: neighborhood search; secure offloading; dynamic approaches; workflow healthcare applications; LSEOS; healthcare; scheduling.
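The neighborhood-search component of such a scheduler can be illustrated with a simple reassignment neighborhood over task-to-node schedules. The makespan objective, task costs, node count, and acceptance rule here are illustrative assumptions, not LSEOS itself.

```python
import random

# Reassignment-neighborhood local search in the spirit of a scheduling
# neighborhood-search step: move one task to another node and keep the move if
# the makespan (maximum node load) does not worsen. All numbers are toy values.
def makespan(assign, costs, nodes):
    loads = [0.0] * nodes
    for task, node in enumerate(assign):
        loads[node] += costs[task]
    return max(loads)

def local_search(costs, nodes, iters=2000, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(nodes) for _ in costs]   # random initial schedule
    best = makespan(assign, costs, nodes)
    for _ in range(iters):
        t = rng.randrange(len(costs))
        old, assign[t] = assign[t], rng.randrange(nodes)
        cand = makespan(assign, costs, nodes)
        if cand <= best:
            best = cand                              # accept equal or better
        else:
            assign[t] = old                          # revert a worsening move
    return best

print(local_search([4, 3, 3, 2, 2, 2], nodes=2))     # lower bound is 16/2 = 8
```

Accepting equal-cost moves lets the search walk across plateaus instead of stalling at the first local optimum, a standard trick in neighborhood search.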
Article
Full-text available
Weakly supervised segmentation for medical images eases models' reliance on pixel-level annotation while advancing the field of computer-aided diagnosis. However, the differences in nodule size in thyroid ultrasound images and the limitations of class activation maps in weakly supervised segmentation methods typically lead to under- and/or over-segmentation problems in real predictions. To alleviate this problem, we propose a weakly supervised segmentation neural network. This new method is based on a dual-branch soft erase module that expands the foreground response region while constraining the erroneous expansion of the foreground region through the enhancement of background features. The sensitivity of the network to nodule scale is further enhanced by a scale feature adaptation module, which in turn generates integral, high-quality segmentation masks. In addition, although the nodule area can be significantly expanded through the soft erase module and the scale feature adaptation module, the activation effect in the nodule edge area is still not satisfactory, so we further add an edge-based attention mechanism to strengthen nodule edge segmentation. The results of experiments performed on a thyroid ultrasound image dataset showed that our new approach significantly outperformed existing weakly supervised semantic segmentation methods, e.g., 5.9% and 6.3% more accurate than the second-best results in terms of the Jaccard and Dice coefficients, respectively.
Article
Full-text available
The number of automobiles has rapidly increased in recent years, and inhabitants' broadened travel options push transportation infrastructures to their limits. With the rapid expansion of vehicles, traffic congestion and car accidents are common occurrences in the city. The Internet of drone vehicle things (IoDV) has developed as a new paradigm for improving traffic situations in urban areas. However, edge computing faces issues such as fault-tolerant, security-enabled, delay-optimal workload assignment. The study formulates the workload assignment problem for IoV applications based on linear integer programming and devises fault-tolerant and security delay-optimal workload assignment (SFDWA) schemes that determine optimal workload assignment in edge computing. The goal is to minimize the average response time, which combines network, computation, security, and fault-tolerance delay. Simulation results show that the proposed schemes gain 15% more optimal workload assignment for IoV applications compared to existing studies.
Article
Full-text available
Crime prediction models are very useful for the police force to prevent crimes from happening and to reduce the crime rate of a city. Existing crime prediction models are not efficient in handling data imbalance and suffer from overfitting. In this research, an adaptive DRQN model is proposed to develop a robust crime prediction model. The proposed adaptive DRQN model applies a GRU instead of an LSTM unit to store the relevant features for effective classification of Sacramento city crime data. Storing relevant features for a long time helps handle the data imbalance problem, and irrelevant features are eliminated to avoid overfitting. Adaptive agents based on an MDP are applied to adaptively learn from the input data and provide effective predictions. The reinforcement learning method is applied in the proposed adaptive DRQN model to select the optimal state value and to identify the best reward value. The proposed adaptive DRQN model achieves an MAE of 36.39, which is better than the 38.82 MAE of the existing Recurrent Q-Learning model.
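The reward-driven value update that a DRQN approximates with a recurrent (GRU) network can be shown in its tabular form. The single-state, two-action toy MDP and the learning constants below are assumptions for illustration, not the paper's setup.

```python
import random

# Tabular sketch of the Q-update a DRQN approximates with a recurrent (GRU)
# network; the single-state, two-action toy MDP and constants are assumptions.
def q_update(Q, s, a, r, s_next, actions=(0, 1), lr=0.5, gamma=0.9):
    target = r + gamma * max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + lr * (target - Q.get((s, a), 0.0))

Q = {}
random.seed(0)
for _ in range(400):        # action 1 always earns reward 1, action 0 earns 0
    a = random.randint(0, 1)
    q_update(Q, 0, a, 1.0 if a == 1 else 0.0, 0)
print(round(Q[(0, 1)], 1))  # converges toward 1 / (1 - gamma) = 10
```

The learned values identify the rewarding action; the DRQN replaces the table with a recurrent network so the "state" can summarize a history of observations.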
Article
Full-text available
Medical image segmentation, a complex and fundamental step in medical image processing, can help doctors make more precise decisions on patient diagnosis. Although multi-threshold image segmentation is one of the most fundamental image segmentation technologies, it requires complex computation and tends to yield unsatisfactory segmentation results, limiting its applications. To solve this problem, this study designs an ensemble multi-strategy-driven shuffled frog leaping algorithm with horizontal and vertical crossover search (HVSFLA) for multi-threshold image segmentation. Specifically, a horizontal crossover search enables different frogs to exchange information and guarantees the compelling exploration of each frog. Meanwhile, a vertical crossover search keeps stagnating frogs searching actively. Therefore, a better balance between diversification and intensification can be ensured. To evaluate its performance, HVSFLA was compared with a range of state-of-the-art algorithms using CEC 2017 benchmark functions. Furthermore, the performance of HVSFLA was also demonstrated on several Berkeley segmentation dataset 500 (BSDS500) images. Finally, the proposed algorithm was applied to breast invasive ductal carcinoma cases using a multi-threshold segmentation technique based on a non-local means 2D histogram integrated with Kapur's entropy. The experimental results demonstrate that the proposed HVSFLA outperforms a broad array of similar competitors, and thus it has great potential for medical image segmentation.
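Kapur's entropy, which the study above uses as its thresholding objective, scores a threshold by the summed entropies of the resulting classes; a minimal single-threshold sketch on an invented toy histogram (the multi-threshold, metaheuristic-optimized version is not reproduced here):

```python
import math

def kapur_entropy(hist, t):
    """Sum of background/foreground entropies for threshold t on a histogram."""
    total = sum(hist)
    p = [h / total for h in hist]
    score = 0.0
    for part in (p[:t], p[t:]):       # background bins, foreground bins
        w = sum(part)                 # class probability mass
        if w == 0:
            continue
        score -= sum(q / w * math.log(q / w) for q in part if q > 0)
    return score

hist = [40, 10, 2, 1, 3, 12, 32]      # toy bimodal gray-level histogram
best_t = max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
print(best_t)
```

With two or more thresholds the search space grows combinatorially, which is why metaheuristics such as HVSFLA are used to maximize this objective instead of brute force.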
Article
Full-text available
Gold nanorod-assisted photothermal therapy (GNR-PTT) is a new cancer treatment technique that has shown promising potential for bladder cancer treatment. The position of the cancer at different locations along the bladder wall lining can potentially affect treatment efficacy, since the laser is irradiated externally from the skin surface. The present study investigates the efficacy of GNR-PTT in the treatment of bladder cancer in mice for tumours growing at three different locations on the bladder, i.e., Case 1: closest to the skin surface, Case 2: at the bottom half of the bladder, and Case 3: at the side of the bladder. Investigations were carried out numerically using an experimentally validated framework for optical-thermal simulations. An in-silico approach was adopted due to the flexibility in placing the tumour at a desired location along the bladder lining. Results indicate that for the treatment parameters considered (laser power 0.3 W, GNR volume fraction 0.01% v/v), only Case 1 resulted in effective GNR-PTT. No damage to the tumour was observed in Cases 2 and 3. Analysis of the thermo-physiological responses showed that the effectiveness of GNR-PTT in treating bladder cancer depends not only on the depth of the tumour from the skin surface, but also on the type of tissue that the laser must pass through before reaching the tumour. In addition, the results rely on GNRs with a diameter of 10 nm and an aspect ratio of 3.8, tuned to exhibit peak absorption at the chosen laser wavelength. Results from the present study highlight the potential of GNR-PTT for the treatment of human bladder cancer. Cases 2 and 3 suggest that GNR-PTT in which the laser must pass through the skin to reach the bladder may be unfeasible in humans.
While this study shows the feasibility of using GNRs for photothermal ablation of bladder cancer, it also identifies the current limitations that need to be overcome for effective clinical application in bladder cancer patients.
Article
Full-text available
Computer-aided diagnosis of clinical images has attracted increasing attention. This research aims to improve the efficiency of gastric cancer (GC) diagnosis, so deep learning (DL) algorithms are tentatively used to assist doctors in the diagnosis of gastric cancer. In the experiment, the 3591 collected gastroscopic images were divided into a network training set and an experimental verification test set. The lesion samples in the images were all marked by endoscopists with many years of clinical experience. To improve performance, the training set was expanded to 5261 endoscopic images. The training set was then input into a convolutional neural network (CNN) for training, yielding the algorithm model DLU-Net. By inputting the 598 test-set samples into the CNN constructed in this paper, five categories (advanced GC, early GC, precancerous lesions, normal, and benign lesions) can be identified, with a total accuracy of 94.1%. It can be concluded that the DL algorithm model constructed in this paper can effectively identify the staging characteristics of cancer in gastroscopic images, greatly improve efficiency, and effectively assist physicians in the diagnosis of GC under gastroscopy.
Article
Full-text available
Because of its simplicity and effectiveness, fuzzy K-nearest neighbors (FKNN) is widely used in the literature. Its parameters have an essential impact on the performance of FKNN; hence, the parameters need to be tuned to suit different problems. Also, choosing more representative features can enhance the performance of FKNN. This research proposes an improved optimization technique based on the sine cosine algorithm (LSCA), which introduces a linear population size reduction mechanism to enhance the original algorithm's performance. Moreover, we developed an FKNN model based on the LSCA that simultaneously performs feature selection and parameter optimization. Firstly, the search performance of LSCA is verified on the IEEE CEC2017 benchmark test functions in comparison to classical and improved algorithms. Secondly, the validity of the LSCA-FKNN model is verified on three medical datasets. Finally, we used the proposed LSCA-FKNN to predict lupus nephritis classes, and the model showed competitive results. An online web service supporting the paper is available at https://aliasgharheidari.com.
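The FKNN classifier at the core of the study assigns soft class memberships from the k nearest neighbours, weighted by inverse distance raised to 2/(m−1); a minimal sketch with invented 1-D data (the LSCA-based parameter tuning and feature selection are omitted):

```python
def fknn_predict(x, data, k=3, m=2.0):
    """Fuzzy K-nearest neighbours: per-class membership of point x.

    data: list of (feature_vector, label) with crisp neighbour labels.
    m: fuzzifier controlling how strongly distance is weighted.
    """
    dist = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b)) ** 0.5
    neighbours = sorted(data, key=lambda s: dist(x, s[0]))[:k]
    weights, total = {}, 0.0
    for vec, label in neighbours:
        w = 1.0 / (dist(x, vec) ** (2.0 / (m - 1.0)) + 1e-9)  # avoid div by 0
        weights[label] = weights.get(label, 0.0) + w
        total += w
    return {label: w / total for label, w in weights.items()}

# Toy 1-D dataset, two classes (values invented).
data = [((0.0,), "A"), ((0.2,), "A"), ((1.0,), "B"), ((1.2,), "B")]
memberships = fknn_predict((0.1,), data, k=3)
print(max(memberships, key=memberships.get))
```

The memberships sum to one, so the output doubles as a confidence estimate rather than a hard vote.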
Article
Full-text available
Torque control of electric drives is a challenging task, as high dynamics must be achieved despite different input and state constraints, while also pursuing secondary objectives, e.g., maximizing power efficiency. Whereas most state-of-the-art methods generally necessitate thorough knowledge of the system model, a model-free deep reinforcement learning torque controller is proposed here. In particular, the deep Q-learning algorithm is utilized, which has been successfully used in different application scenarios with a finite action set in the recent past. This nicely fits the considered system, a permanent magnet synchronous motor supplied by a two-level voltage source inverter, since the latter is a power supply unit with a limited number of distinct switching states. This contribution investigates the deep Q-learning finite-control-set framework and its design, including the conception of a reward function that incorporates the demands concerning torque tracking, efficiency maximization, and compliance with operation limits. In addition, a comprehensive hyperparameter optimization is presented, which addresses the many degrees of freedom of the deep Q-learning algorithm in striving for an optimal controller configuration. Advantages and remaining challenges of the proposed algorithm are disclosed through an extensive validation, which includes a direct comparison with a state-of-the-art model predictive direct torque controller.
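Two ingredients of such a controller are the finite action set (the inverter's discrete switching states) and a reward that balances torque tracking, efficiency, and operation limits; a toy sketch with invented weights and limits (not the paper's actual reward design):

```python
import random

ACTIONS = range(8)                   # two-level inverter: 8 switching states

def reward(torque_err, current, current_limit=10.0, w_eff=0.1):
    """Toy reward: penalise torque error, copper losses, and limit violations.
    All weights and limits here are invented for illustration."""
    if abs(current) > current_limit:         # operation-limit violation
        return -10.0
    return -abs(torque_err) - w_eff * current ** 2

def epsilon_greedy(q_values, eps=0.1, rng=random):
    """Pick a random switching state with probability eps, else the greedy one."""
    if rng.random() < eps:
        return rng.randrange(len(q_values))
    return max(ACTIONS, key=lambda a: q_values[a])

q = [0.0, 0.3, -0.2, 1.1, 0.0, 0.7, -0.5, 0.2]   # Q-estimates per switching state
print(epsilon_greedy(q, eps=0.0))                 # greedy choice
print(reward(torque_err=0.5, current=3.0))
```

The finite action set is what lets deep Q-learning apply directly: the network outputs one Q-value per switching state and the controller simply argmaxes over them.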
Article
Full-text available
Objective: The systemic immune-inflammation index (SII), an inexpensive and widely available hematologic marker of inflammation, has been linked to tumor progression, metastatic spread, and poor patient prognosis. The objective of this study is to explore the prognostic value of SII in patients with urinary system cancers (USCs). Materials and methods: A comprehensive literature search was conducted in the PubMed, EMBASE, Web of Science, Cochrane Library, Chinese National Knowledge Infrastructure (CNKI), and Wanfang databases from inception to May 10, 2020, to identify potential studies that assessed the prognostic role of the SII in USCs. The hazard ratio (HR) with a 95% confidence interval (CI) was used to evaluate the correlation between SII and overall survival (OS), progression-free survival (PFS), and cancer-specific survival (CSS) in USC patients. Results: A total of 12 studies, including 2,693 USC patients, were eventually included in the meta-analysis. An elevated SII was significantly associated with poor OS (HR=1.28, 95% CI: 1.17-1.39, p<0.001), PFS (HR=1.51, 95% CI: 1.25-1.82, p<0.001) and CSS (HR=3.42, 95% CI: 1.49-7.91, p<0.001). Furthermore, subgroup analysis indicated that an SII above the cutoff value could predict poor OS in renal cell carcinoma (HR=1.23, p<0.001), prostate carcinoma (HR=1.95, p<0.001), bladder carcinoma (HR=5.40, p<0.001), testicular cancer (HR=6.09, p<0.001) and upper tract urothelial carcinoma (HR=2.19, p<0.001). Moreover, these associations did not vary significantly by tumor subtypes and stages of USCs, sample sizes, study types, cutoff value defining elevated SII, treatment methods, or NOS scores. Conclusions: SII may serve as a useful prognostic indicator in USCs and contribute to prognosis evaluation and treatment strategy formulation. However, more well-designed studies are warranted to verify our findings.
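Pooled hazard ratios of this kind are typically computed on the log scale with inverse-variance weights; a minimal fixed-effect sketch using invented study-level values (the meta-analysis above used its own extracted data and may have used a random-effects model):

```python
import math

def pooled_hazard_ratio(studies):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    studies: list of (hr, ci_low, ci_high) with 95% CIs.
    Returns (pooled HR, 95% CI low, 95% CI high).
    """
    num = den = 0.0
    for hr, lo, hi in studies:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_hr
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Two invented studies (not values from the meta-analysis above).
hr, lo, hi = pooled_hazard_ratio([(1.30, 1.10, 1.54), (1.20, 0.95, 1.52)])
print(round(hr, 2), round(lo, 2), round(hi, 2))
```

Because pooling happens in log space, the combined estimate sits between the study HRs, weighted toward the study with the narrower confidence interval.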
Article
Full-text available
Autonomous surface vehicles (ASVs) excel at monitoring and measuring aquatic nutrients due to their autonomy, mobility, and relatively low cost. When planning paths for such vehicles, the task of patrolling with multiple agents is usually addressed with heuristic approaches, such as Reinforcement Learning (RL), because of the complexity and high dimensionality of the problem. Not only do efficient paths have to be designed, but disturbances in movement or the battery's performance must also be addressed. For this multiagent patrolling task, the proposed approach is based on a centralized Convolutional Deep Q-Network, designed with a final independent dense layer for every agent to deal with scalability, under the assumption that every agent has the same properties and capabilities. For this purpose, a tailored reward function is created that penalizes illegal actions (such as collisions) and rewards visiting idle cells (cells that remain unvisited for a long time). A comparison with various multiagent Reinforcement Learning (MARL) algorithms has been carried out (Independent Q-Learning, Dueling Q-Network and multiagent Double Deep Q-Learning) in a case-study scenario such as the Ypacaraí lake in Asunción (Paraguay). The trained multiagent policy leads to an average improvement of 15% compared to lawn-mower trajectories and a 6% improvement over IDQL for the case study considered. In terms of training speed, the proposed approach runs three times faster than the independent algorithm.
Article
Full-text available
Location-based services (LBS) have become an important part of people's daily life. However, while providing great convenience for mobile users, LBS pose a serious threat to personal privacy, i.e., location privacy and query privacy. Existing privacy methods for LBS generally take into consideration only location privacy or query privacy, without considering the problem of protecting both of them simultaneously. In this paper, we propose to construct a group of dummy query sequences to cover up the query locations and query attributes of mobile users and thus protect users' privacy in LBS. First, we present a client-based framework for user privacy protection in LBS, which requires no change to the existing LBS algorithm on the server side and no compromise to the accuracy of an LBS query. Second, based on this framework, we introduce a privacy model to formulate the constraints that ideal dummy query sequences should satisfy: (1) the similarity of feature distribution, which measures the effectiveness of the dummy query sequences in hiding a true user query sequence; and (2) the exposure degree of user privacy, which measures the effectiveness of the dummy query sequences in covering up the location privacy and query privacy of a mobile user. Finally, we present an implementation algorithm that satisfies the privacy model. Both theoretical analysis and experimental evaluation demonstrate the effectiveness of our proposed approach, showing that the location privacy and attribute privacy behind LBS queries can be effectively protected by the dummy queries generated by our approach.
Article
In virtualized cloud computing systems, energy reduction is a serious concern, since it offers major advantages such as reducing running costs, increasing system efficiency, and protecting the environment. An energy-efficient task scheduling strategy is a viable way to meet these goals. Unfortunately, mapping cloud resources to user requests so as to achieve good performance while minimizing the energy consumption of cloud resources within a user-defined deadline is a huge challenge. This paper proposes an Energy and Performance-Efficient Task Scheduling Algorithm (EPETS) in a heterogeneous virtualized cloud to resolve the issue of energy consumption. The proposed algorithm has two stages: an initial scheduling stage that reduces execution time and satisfies task deadlines without considering energy consumption, and a task-reassignment stage that finds the best execution location within the deadline limit with less energy consumption. Moreover, to strike a reasonable balance between task scheduling and energy saving, we suggest an energy-efficient task priority system. Simulation results show that, compared to the existing energy-efficient scheduling methods RC-GA, AMTS, and E-PAGA, the proposed solution significantly reduces energy consumption and improves performance by 5%-20% while satisfying deadline constraints.
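The two-stage idea can be sketched as follows, with invented VM speeds and power ratings standing in for the heterogeneous cloud (a drastic simplification of EPETS, not its actual implementation):

```python
# Hypothetical VMs: (speed in MIPS, power in watts); task length in MI.
VMS = [(10.0, 50.0), (5.0, 15.0), (2.0, 4.0)]

def two_stage_schedule(task_len, deadline):
    """Stage 1: schedule on the fastest VM (meets the deadline if any VM can).
    Stage 2: reassign to the deadline-feasible VM with the least energy use."""
    fastest = max(VMS, key=lambda v: v[0])               # stage 1
    if task_len / fastest[0] > deadline:
        return None                                      # deadline unmeetable
    energy = lambda v: (task_len / v[0]) * v[1]          # time * power (joules)
    feasible = [v for v in VMS if task_len / v[0] <= deadline]
    return min(feasible, key=energy)                     # stage 2

vm = two_stage_schedule(task_len=100.0, deadline=30.0)
print(vm)
```

The sketch shows why reassignment pays off: the fastest VM (500 J here) is rarely the cheapest one that still meets the deadline.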
Article
Remote photoplethysmography (rPPG) has been an active research topic in recent years. While most existing methods are focusing on eliminating motion artifacts in the raw traces obtained from single-scale region-of-interest (ROI), it is worth noting that there are some noise signals that cannot be effectively separated in single-scale space but can be separated more easily in multi-scale space. In this paper, we analyze the distribution of pulse signal and motion artifacts in different layers of a Gaussian pyramid. We propose a method that combines multi-scale analysis and neural network for pulse extraction in different scales, and a layer-wise attention mechanism to adaptively fuse the features according to signal strength. In addition, we propose spatial-temporal joint attention module and channel-temporal joint attention module to learn and exaggerate pulse features in the joint spaces, respectively. The proposed remote pulse extraction network is called Joint Attention and Multi-Scale fusion Network (JAMSNet). Extensive experiments have been conducted on two publicly available datasets and one self-collected dataset. The results show that the proposed JAMSNet shows better performance than state-of-the-art methods.
Article
Due to their insufficient generalization ability, iris segmentation algorithms based on deep learning cannot accurately segment iris images without corresponding ground truth (GT) data. Moreover, prior to recognition, the segmented image requires normalization to reduce the influence of pupil deformation. However, normalization of nonconnected iris regions will introduce noise, thereby decreasing the recognition rate. This paper proposes an end-to-end unified framework based on deep learning that does not include normalization in order to achieve improved accuracy in iris segmentation and recognition. In this framework, a multiattention dense connection network (MADNet) and dense spatial attention network (DSANet) are designed for iris segmentation and recognition, respectively. Finally, numerous ablation experiments are conducted to demonstrate the effectiveness of MADNet and DSANet. Experiments on three employed databases show that our proposed method achieves the best segmentation and recognition performance on low-quality iris images without corresponding GT data.
Article
In recent years, sEMG (surface electromyography) signals have been increasingly used to operate wearable devices. The development of mechanical lower limbs or exoskeletons controlled by the nervous system requires greater accuracy in recognizing lower limb activity. There is comparatively little research on devices that assist the human body in uphill movement. However, developing controllers that can accurately predict and control human upward movements in real time requires appropriate signal pre-processing methods and prediction algorithms. For this purpose, this paper investigates the effects of various sEMG pre-processing methods and algorithms on the prediction results. The investigation involved ten subjects (five males and five females) with normal knee joints. The relevant data of the subjects were collected on a constructed ramp. To obtain feature values that reflect the gait characteristics, an improved PCA algorithm based on the kernel method is proposed for dimensionality reduction to remove redundant information. Then, a new model (CNN + LSTM) was proposed to predict the knee joint angle. Multiple approaches were utilized to validate the superiority of the improved PCA method and the CNN-LSTM model. The feasibility of the method was verified by analyzing the gait prediction results of different subjects. Overall, the prediction time of the method was 25 ms, and the prediction error was 1.34 ± 0.25 deg. Compared with traditional machine learning methods such as BP (backpropagation) neural networks, RF (random forest), and SVR (support vector regression), data processed with the improved PCA algorithm performed best in terms of convergence time and prediction accuracy in the CNN-LSTM model. The experimental results demonstrate that the proposed method (improved PCA + CNN-LSTM) effectively recognizes lower limb activity from sEMG signals.
For the same data input, the EMG signal processed using the improved PCA method performed better in terms of prediction results. This is the first step toward myoelectric control of aided exoskeleton robots using discrete decoding. The study results will lead to the future development of neuro-controlled mechanical exoskeletons that will allow troops or disabled individuals to engage in a greater variety of activities.
Article
In this paper, we present a general optimization framework that leverages structured sparsity to achieve superior recovery results. The traditional method for solving the structured sparse objectives based on $\ell _{2,0}$ -norm is to use the $\ell _{2,1}$ -norm as a convex surrogate. However, such an approximation often yields a large performance gap. To tackle this issue, we first provide a framework that allows for a wide range of surrogate functions (including non-convex surrogates), which exhibits better performance in harnessing structured sparsity. Moreover, we develop a fixed point algorithm that solves a key underlying non-convex structured sparse recovery optimization problem to global optimality with a guaranteed super-linear convergence rate. Building on this, we consider three specific applications, i.e., outlier pursuit, supervised feature selection, and structured dictionary learning, which can benefit from the proposed structured sparsity optimization framework. In each application, how the optimization problem can be formulated and thus be relaxed under a generic surrogate function is explained in detail. We conduct extensive experiments on both synthetic and real-world data and demonstrate the effectiveness and efficiency of the proposed framework.
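For the ℓ2,1 convex surrogate mentioned above, the proximal operator has a closed form — row-wise group soft-thresholding — which is what makes such structured-sparsity objectives tractable; a minimal sketch (the paper's non-convex surrogates and fixed-point algorithm are not reproduced here):

```python
def prox_l21(rows, lam):
    """Row-wise proximal operator of lam * ||X||_{2,1}.

    Each row is shrunk toward zero by lam in Euclidean norm, and a row
    whose norm is below lam is zeroed entirely (structured sparsity)."""
    out = []
    for row in rows:
        norm = sum(v * v for v in row) ** 0.5
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

X = [[3.0, 4.0],      # norm 5.0  -> shrunk toward zero, kept
     [0.3, 0.4]]      # norm 0.5  -> below lam, zeroed out as a group
print(prox_l21(X, lam=1.0))
```

Zeroing whole rows at once is exactly the group structure the ℓ2,0/ℓ2,1 objectives encode, in contrast to the element-wise shrinkage of the plain ℓ1 norm.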
Article
Niemann-Pick Class 1 (NPC1) disease is a rare and debilitating neurodegenerative lysosomal storage disease (LSD). Metabolomics datasets of NPC1 patients available to perform this type of analysis are often limited in the number of samples and severely unbalanced. In order to improve the predictive capability and identify new biomarkers in an NPC1 disease urinary dataset, data augmentation (DA) techniques based on computational intelligence have been employed to create synthetic samples, i.e. the addition of noise, oversampling techniques and conditional generative adversarial networks. These techniques have been used to evaluate their predictive capacities on a set of urine samples donated by 13 untreated NPC1 disease and 47 heterozygous (parental) carrier control participants. Results on the prediction have also been obtained using different machine learning classification models and the partial least squares techniques. These results provide strong evidence for the ability of DA techniques to generate good quality synthetic data. Results acquired show increases in sensitivity of 20%–50%, an F1 score of 6%–30%, and a predictive capacity of 0.3 (out of 1). Additionally, more conventional forms of multivariate data analysis have been employed. These have allowed the detection of unusual urinary metabolite profiles, and the identification of biomarkers through the use of synthetically augmented datasets. Results indicate that urinary branched-chain amino acids such as valine, 3-aminoisobutyrate and quinolinate, may be employable as valuable biomarkers for the diagnosis and prognostic monitoring of NPC1 disease.
Article
In this article, a curious phenomenon in the tensor recovery algorithm is considered: can the same recovered results be obtained when the observation tensors in the algorithm are transposed in different ways? If not, it is reasonable to imagine that some information within the data will be lost for the case of observation tensors under certain transpose operators. To solve this problem, a new tensor rank called weighted tensor average rank (WTAR) is proposed to learn the relationship between different resulting tensors by performing a series of transpose operators on an observation tensor. WTAR is applied to three-order tensor robust principal component analysis (TRPCA) to investigate its effectiveness. Meanwhile, to balance the effectiveness and solvability of the resulting model, a generalized model that involves the convex surrogate and a series of nonconvex surrogates are studied, and the corresponding worst case error bounds of the recovered tensor is given. Besides, a generalized tensor singular value thresholding (GTSVT) method and a generalized optimization algorithm based on GTSVT are proposed to solve the generalized model effectively. The experimental results indicate that the proposed method is effective.
Chapter
In supervised learning, a machine learning model learns from a training dataset containing input features and their respective outputs. The trained model is then used to forecast the outcome for a collection of test data. Machine learning models have recently been widely used in a variety of classification problems in the medical field to diagnose diseases. This study aims to develop a machine learning model that correctly predicts two urinary-related diseases, acute urinary bladder inflammation and acute renal pelvic nephritis, given the characteristics of these diseases. The dataset used in this study includes about 120 records of different patients and has six independent variables and one dependent variable (target). In this study, it was observed that supervised machine learning models (XGBoost and random forest) can be applied effectively to predict these two urinary diseases. For this classification problem, both models achieved 100 percent accuracy; the result was further validated through k-fold cross-validation, where k = 10.
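The k-fold cross-validation used above partitions the records into k folds, training on k−1 folds and testing on the held-out one; a minimal index-splitting sketch (model-agnostic — the XGBoost/random-forest models themselves are not reproduced):

```python
def k_fold_indices(n, k=10):
    """Split indices 0..n-1 into k folds; yield (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test

# Toy check on 120 records (the dataset size above), k = 10:
# each record appears in exactly one test fold.
splits = list(k_fold_indices(120, k=10))
print(len(splits), len(splits[0][1]), len(splits[0][0]))
```

In practice the indices would be shuffled (and often stratified by class) before folding; libraries such as scikit-learn provide this, but the core bookkeeping is as above.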
Chapter
Delayed prediction of sepsis is potentially life-threatening and strains limited medical resources. Predicting the absence of sepsis in patients who do not have it, and predicting sepsis as early as possible in patients who do, is the key motivation of this work. The foremost objective is to create a platform in this digitized environment using machine learning and deep learning models to detect the deterioration of sepsis in a patient before it is too late. This work aims to predict sepsis sequentially using Long Short-Term Memory (LSTM), an RNN model that helps to learn order dependencies, for clinical detection of sepsis 6 h in advance using clinical data. Further, reinforcement learning is applied to label the data for accurate matching. Keywords: Sepsis, Reinforcement learning, LSTM, RNN
Article
Pixel-level annotation for supervised learning tends to be tedious and inaccurate when facing complex scenes, leading models to produce incomplete predictions, especially on self shadows and shadow boundaries. This paper presents a weakly supervised graph convolutional network (namely WSGCN) for generating an accurate pseudo shadow mask from only a few annotation scribbles. Given these limited annotations and an a priori superpixel segmentation mask, we seek a robust graph construction strategy and a label propagation method. Specifically, our network operates on superpixel graphs, allowing us to reduce the data dimensions by several orders of magnitude. Then, under the guidance of scribbles, we formulate the generation of a shadow mask as a weakly supervised learning task and learn a 2-layer graph convolutional network (GCN) separately for each training image. Experimental results on the benchmark datasets SBU and ISTD show that our network can achieve impressive performance using only a few thousand parameters, and that training on our re-annotated data can further improve the performance of state-of-the-art detectors.
Article
Objective This study investigates the effects of vesicoureteral reflux (VUR) in the upper and lower urinary tracts with and without ureteral stenosis and with a double J stent (DJS). Methods The entire length of the urinary tract with an implanted DJS was modeled. To assess the possibility of VUR, the measured values were used as boundary conditions for the baseline; the maximum cystometric bladder capacity (MCBC) during the filling phase and the maximum vesical pressure during the voiding phase were computed. The flow rates, flow patterns, wall shear stress (WSS) distribution, impact force induced by reflux urination, and helicity of the bladder were investigated for the urinary system. Results Flow from the bladder to the renal pelvis was detected at maximum vesical pressure (75 cmH2O) during the voiding phase, and a small amount (1.09 mL/s) of VUR was noted at the MCBC during the filling phase. The WSS increased when the reflux was large. Helicity within the bladder varied with the stenosis as well as the opening and closing of the urethra. The reflux within the stent was reduced by 40% by inserting a ball into the stent. Conclusion The main VUR factor was the opening and closing of the vesicoureteric junction by the detrusor muscle. The largest urine reflux (11.7 mL/s) to the kidney occurred when the detrusor muscle was relaxed. Significance Ureteral stenosis affected the VUR and reduced urine reflux. Ball insertion in the stent reduced urine reflux through the stent lumen.
Article
Background Perioperative acute kidney injury (AKI) is challenging to predict and a common complication of lower limb arthroplasties. Our aim was to create a machine learning model to predict AKI defined by both serum creatinine (sCr) levels and urine output (UOP) and to investigate which features are important for building the model. The features were divided into preoperative, intraoperative, and postoperative feature sets. Methods This retrospective, register-based study assessed 648 patients who underwent primary knee or hip replacement at Oulu University Hospital, Finland, between 1/2016 and 2/2017. The RUSBoost algorithm was chosen to establish the models, and it was compared to Naïve/Kernel Bayes and support vector machine (SVM). Models of AKI classified by either sCr levels or UOP were established. All the models were trained and validated using a five-fold cross-validation approach. An external test set was not available at the time of this study. Results The performance of both the sCr level- and UOP-based AKI models improved when pre-, intra-, and postoperative features were used together. The best sCr level-based AKI model performed as follows: area under the receiver operating characteristic (AUROC) of 0.91 (95% CI ± 0.02), area under precision-recall (AUPR) of 0.35 (95% CI ± 0.04), sensitivity of 0.88 (95% CI ± 0.03), specificity of 0.87 (95% CI ± 0.03), and precision of (95% CI ± 0.03). This model correctly classified 22 out of 25 patients with AKI. The best UOP-based AKI model performed as follows: AUROC of 0.98 (95% CI ± 0.02), AUPR of 0.48 (95% CI ± 0.04), sensitivity of 0.88 (95% CI ± 0.02), specificity of 0.93 (95% CI ± 0.03), and precision of 0.34 (95% CI ± 0.04). This model correctly classified 23 out of 26 patients with AKI. In the sCr-AKI models, estimated glomerular filtration rate (eGFR)-related features were most important, and in the UOP-based AKI models, UOP-related features were most important.
Other important and recurring features in the models were age, sex, body mass index, ASA status, operation type, preoperative eGFR, and preoperative sCr level. Naïve/Kernel Bayes performed similarly to RUSBoost. SVM performed poorly. Conclusions The performance of the models improved after the inclusion of intra- and postoperative features with preoperative features. The results of our study are not generalizable, and additional larger studies are needed. The optimal ML method for this kind of data is still an open research question.
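The AUROC values reported above equal the probability that a randomly chosen positive case is ranked above a randomly chosen negative one; a minimal sketch of that rank-based computation with invented scores (not the RUSBoost model's outputs):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs that the classifier ranks correctly."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predictions (invented), 1 = AKI, 0 = no AKI.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2, 0.1]
print(auroc(labels, scores))
```

Because it depends only on ranking, AUROC is insensitive to the class imbalance that motivated RUSBoost, which is why the study also reports AUPR and sensitivity.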
Article
Background Urinary tract infection (UTI) is one of the major nosocomial infections significantly affecting the outcomes of immobile stroke patients. Previous studies have identified several risk factors, but it is still challenging to accurately estimate personal UTI risk. Objectives We aimed to develop predictive models for UTI risk identification in immobile stroke patients. Methods Research data were collected from our previous multi-centre study. The derivation cohort included 3982 immobile stroke patients collected from November 1, 2015 to June 30, 2016; the external validation cohort included 3837 patients collected from November 1, 2016 to July 30, 2017. Six machine learning models and an ensemble learning model were derived based on 80% of the derivation cohort, and effectiveness was evaluated with the remaining 20%. We used Shapley additive explanation values to determine feature importance and examine the clinical significance of the prediction models. Results 2.59% (103/3982) of patients were diagnosed with UTI in the derivation cohort, and 1.38% (53/3837) in the external cohort. The ensemble learning model performed the best in area under the receiver operating characteristic (ROC) curve in internal validation (82.2%) and second best in external validation (80.8%). In addition, the ensemble learning model had the best sensitivity in both the internal and external validation sets (80.9% and 81.1%, respectively). We also identified seven UTI risk factors (pneumonia, glucocorticoid use, female sex, mixed cerebrovascular disease, increased age, prolonged length of stay, and duration of catheterization). Conclusions Our ensemble learning model demonstrated promising performance. Future work should continue to develop a more concise scoring tool based on machine learning models and prospectively examine the model in practical use, thus improving clinical outcomes.
Article
Objective The diagnosis of bladder dysfunction in children depends on confirming abnormal bladder shape and bladder compliance. The existing gold standard requires conducting a voiding cystourethrogram (VCUG) examination and a urodynamic studies (UDS) examination separately. To reduce examination time and discomfort for children, we propose a novel method to assess bladder compliance by measuring intravesical pressure during the VCUG examination, without an extra UDS. Methods Our method consisted of four steps. First, we developed a single-tube device that can measure, display, store, and transmit real-time pressure data. Second, we conducted clinical trials with the equipment on a cohort of 52 patients (32 negative and 20 positive cases). Third, we preprocessed the data to eliminate noise, extracted features, and used the least absolute shrinkage and selection operator (LASSO) to screen out important features. Finally, several machine learning methods were applied to classify and predict the bladder compliance level, including support vector machine (SVM), Random Forest, XGBoost, perceptron, logistic regression, and Naive Bayes, and the classification performance was evaluated. Results 73 features were extracted, including first-order and second-order time-domain features, wavelet features, and frequency-domain features. 15 key features were selected, and the model showed promising classification performance. The highest AUC value was 0.873, achieved by the SVM algorithm, with a corresponding accuracy of 84%. Conclusion We designed a system to quickly obtain intravesical pressure during the VCUG test, and our classification model is competitive in judging patients' bladder compliance. Significance This could facilitate rapid auxiliary diagnosis of bladder disease based on real-time data.
The promising classification results are expected to provide doctors with a reliable basis for the auxiliary diagnosis of some bladder diseases prior to UDS.
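The LASSO screening step named above works by shrinking coefficients toward zero and keeping only those that survive. As a hedged sketch of that mechanism (not the study's pipeline), the LASSO proximal operator and the resulting feature selection look like this:

```python
def soft_threshold(x, lam):
    # LASSO proximal operator: shrink x toward zero; small values become exactly 0
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def select_features(coefs, lam):
    # Keep the indices whose shrunken coefficient is nonzero
    return [i for i, c in enumerate(coefs) if soft_threshold(c, lam) != 0.0]
```

In the full algorithm this operator is applied repeatedly inside coordinate descent; the sketch only shows why LASSO, unlike ridge regression, produces exact zeros and hence a 73-to-15 feature reduction.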
Article
This paper presents a novel Reinforcement Learning (RL)-based control approach that uses a combination of a Deep Q-Learning (DQL) algorithm and a metaheuristic Gravitational Search Algorithm (GSA). The GSA is employed to initialize the weights and the biases of the Neural Network (NN) involved in DQL in order to avoid the instability, which is the main drawback of the traditional randomly initialized NNs. The quality of a particular set of weights and biases is measured at each iteration of the GSA-based initialization using a fitness function aiming to achieve the predefined optimal control or learning objective. The data generated during the RL process is used in training a NN-based controller that will be able to autonomously achieve the optimal reference tracking control objective. The proposed approach is compared with other similar techniques which use different algorithms in the initialization step, namely the traditional random algorithm, the Grey Wolf Optimizer algorithm, and the Particle Swarm Optimization algorithm. The NN-based controllers based on each of these techniques are compared using performance indices specific to optimal control as settling time, rise time, peak time, overshoot, and minimum cost function value. Real-time experiments are conducted in order to validate and test the proposed new approach in the framework of the optimal reference tracking control of a nonlinear position servo system. The experimental results show the superiority of this approach versus the other three competing approaches.
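The core idea above is fitness-guided initialization: score many candidate weight vectors before training instead of accepting one random draw. The sketch below is our own simplification, substituting plain random search for the gravitational dynamics of GSA and using a toy tracking-error fitness; both are assumptions, not the paper's setup.

```python
import random

def fitness(weights, data):
    # Toy fitness: negative squared tracking error of a one-weight linear model
    w = weights[0]
    return -sum((w * x - y) ** 2 for x, y in data)

def initialize_weights(data, candidates=200, dim=1, seed=0):
    # Fitness-guided initialization sketch: sample many candidate weight
    # vectors, score each, and keep the best. A metaheuristic such as GSA
    # would additionally move candidates toward high-fitness regions.
    rng = random.Random(seed)
    best, best_fit = None, float("-inf")
    for _ in range(candidates):
        w = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
        f = fitness(w, data)
        if f > best_fit:
            best, best_fit = w, f
    return best
```

Even this crude version shows the claimed benefit: the network starts near a low-error region rather than at an arbitrary random point, reducing early instability.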
Article
This paper describes a novel gait pattern recognition method based on Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) for a lower limb exoskeleton. An Inertial Measurement Unit (IMU) installed on the exoskeleton collects motion information, which is used as the LSTM-CNN input. This article considers five common gait patterns: walking, going up stairs, going down stairs, sitting down, and standing up. In the LSTM-CNN model, the LSTM layer is used to process temporal sequences and the CNN layer is used to extract features. To optimize the deep neural network structure proposed in this paper, hyperparameter selection experiments were carried out. In addition, to verify the superiority of the proposed recognition method, it is compared with several common methods such as LSTM, CNN, and SVM. The results show that the average recognition accuracy can reach 97.78%, which is a good recognition effect. Finally, according to the experimental results of gait pattern switching, the proposed method can identify the switching gait pattern in time, which shows that it has good real-time performance.
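Before an IMU stream reaches a sequence model like the LSTM-CNN above, it is typically segmented into fixed-width, overlapping windows. The helper below is a generic preprocessing sketch under that assumption, not the paper's actual pipeline:

```python
def sliding_windows(signal, width, step):
    # Segment a 1-D sensor stream into fixed-width, overlapping windows,
    # a common preprocessing step before an LSTM/CNN sequence classifier.
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]
```

The window width trades off latency against context: short windows enable the timely gait-switch detection reported above, while longer windows give the model more of each stride to look at.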
Article
Designing a health care building is a very difficult process in which a great number of parameters and variables must be considered. The building should meet the needs of the population. Additionally, the design of this type of building and its related facilities involves a great number of regulations, which are adapted to different countries. Thus, health care facilities must be designed according to numerous and complex regulations. Checking these regulations is very difficult and usually involves large teams of specialized engineers and architects. The proposed Case-Based Reasoning (CBR) and Reinforcement Learning approach can analyse building-design data (provided in an Extensible Markup Language (XML) file or other compatible formats), checking and validating the regulations. This approach reduces the need for specialized, highly qualified personnel by providing a report with the regulation checks and the traceability of warnings and faults in the application of regulations.
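As a minimal sketch of the regulation-checking idea above, one can parse an XML building description and report the elements that violate a rule. The schema (`room` elements with `type` and `area`) and the minimum-ward-area threshold are hypothetical, invented for illustration only:

```python
import xml.etree.ElementTree as ET

# Hypothetical building description and regulatory threshold (illustration only)
SPEC = """<building>
  <room type="ward" area="18.0"/>
  <room type="ward" area="9.5"/>
</building>"""
MIN_WARD_AREA = 12.0

def check_regulations(xml_text, min_area=MIN_WARD_AREA):
    # Validate each ward against the area rule; return (index, area) faults
    # so the report keeps traceability back to the offending element.
    root = ET.fromstring(xml_text)
    faults = []
    for i, room in enumerate(root.findall("room")):
        if room.get("type") == "ward" and float(room.get("area")) < min_area:
            faults.append((i, float(room.get("area"))))
    return faults
```

A real system would load many such rules per jurisdiction and, as the abstract describes, let CBR retrieve precedents while RL prioritizes which checks to apply.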
Article
Introduction: Although intraoperative iatrogenic ureteric injuries (IUI) are rare, significant consequences can occur if they are unrecognized at the time. The focus of our study is to characterize the associated morbidity and identify predictors of delayed recognition of IUI. Methods: Sunnybrook Health Sciences Centre Research Ethics Board approved the study. Patients with a diagnosis of IUI between 2002 and 2020 were identified through an institutional electronic medical record system. Data pertaining to the demographic characteristics, diagnosis, and management of IUI and overall outcomes were collected retrospectively. Results: Of the 103 patients identified, 83% were female, 52% had previous abdominal surgery, and 18% had previous radiation. The median age was 67 (21-88) years. Twenty percent were not recognized at the time of surgery. Although delayed recognition was not a significant predictor for poor outcome after subsequent repair (i.e., hydronephrosis, ureteric stricture/obstruction), it was associated with substantial morbidity to the patient (i.e., additional procedures) and increased cost to the healthcare system (i.e., longer hospital stay, readmission to hospital). Patients who underwent laparoscopic surgery were 11 times more likely to have an unrecognized IUI than those who underwent open surgery (odds ratio 11.515, p=0.0001). Conclusions: Delayed recognition of IUI may be associated with considerable adverse effects. In this retrospective case series, we identified laparoscopic surgery as a significant predictor for delayed recognition of IUI. This information underscores the need for future studies to facilitate intraoperative identification of ureteric injuries, particularly during laparoscopic procedures.
Article
Rapid advancements in the internet of things (IoT) are driving massive transformations of health care, one of the largest and most critical global industries. Recent pandemics, such as coronavirus disease 2019 (COVID-19), have increased demands for ubiquitous, preventive, and personalized health care provided to the public at reduced risk and cost with rapid care. Mobile crowdsourcing could potentially meet the future massive health care IoT (mH-IoT) demands by enabling anytime, anywhere sensing and analysis of health-related data to tackle such a pandemic situation. However, data reliability and availability are among the many challenges for the realization of next-generation mH-IoT, especially in COVID-19 epidemics. Therefore, more intelligent and robust health care frameworks are required to tackle such pandemics. Recently, reinforcement learning (RL) has proven its strengths in providing intelligent data reliability and availability. The action-state learning procedure of RL-based frameworks enables the learning system to enhance the optimal use of information as time passes and data increase. In this article, we propose an RL-based crowd-to-machine (RLC2M) framework for mH-IoT, which leverages crowdsourcing and an RL model (Q-learning) to address the health care information processing challenges. The simulation results show that the proposed framework rapidly converges with accumulated rewards to reveal the sensing environment situation.
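The Q-learning model named above rests on a single tabular update rule: move the value of a state-action pair toward the observed reward plus the discounted value of the best next action. A minimal sketch (the state and action names are placeholders, not the RLC2M design):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    # Q maps state -> {action: value}; next_state must be present (may be empty).
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    target = reward + gamma * best_next
    Q[state][action] += alpha * (target - Q[state][action])
    return Q[state][action]
```

Repeating this update as crowd-sensed reports arrive is what produces the accumulating-reward convergence the simulation results describe.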
Article
Mixed-Integer Non-Linear Programming (MINLP) is not rare in real-world applications such as portfolio investment. It has brought great challenges to optimization methods due to the complicated search space that has both continuous and discrete variables. This paper considers the multi-objective constrained portfolio optimization problems that can be formulated as MINLP problems. Since each continuous variable depends on a discrete variable, we propose a Compressed Coding Scheme (CCS), which encodes the dependent variables into a continuous one. In this manner, we can reuse some existing search operators, and the dependence among variables is utilized while the algorithm optimizes the compressed variables. CCS actually bridges the gap between the portfolio optimization problems and the existing optimizers, such as Multi-Objective Evolutionary Algorithms (MOEAs). The new approach is applied to two benchmark suites, involving numbers of assets from 31 to 2235. The experimental results indicate that CCS is not only efficient but also robust for dealing with the multi-objective constrained portfolio optimization problems.
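One simple way to realize the compressed-coding idea above is to let a single continuous variable carry both the discrete and the continuous part: the integer part selects the asset and the fractional part gives the investment level. This decoding is our illustrative assumption, not necessarily the paper's exact encoding:

```python
import math

def decode(z, n_assets):
    # Compressed coding sketch: one continuous gene z in [0, n_assets)
    # encodes a (discrete asset index, continuous weight) pair.
    z = min(max(z, 0.0), n_assets - 1e-9)  # clip into the valid range
    asset = int(math.floor(z))
    weight = z - asset
    return asset, weight
```

Because the search now runs over plain continuous vectors, off-the-shelf MOEA operators (crossover, mutation) apply unchanged, which is exactly the gap-bridging the abstract claims.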
Chapter
In the medical and healthcare industry, where the available information or data is never sufficient, Federated Learning (FL) enables AI models to learn on private data without compromising privacy. It has opened the door to ample research because of its high communication efficiency in distributed training. The primary objective of the chapter is to highlight the adaptability and working of FL techniques in the healthcare system, especially in drug development, clinical diagnosis, digital health monitoring, and various disease prediction and detection systems. The first section of the chapter comprises a background study on an FL framework for healthcare, the FL working model in healthcare, and various important benefits of FL. The next section describes reported work, highlighting different research efforts in the fields of electronic health record systems, drug discovery, and disease prediction systems using the FL model. The final section presents a comparative analysis of different FL algorithms across health sectors, using parameters such as accuracy, area under the curve, precision, recall, and F-score.
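The privacy property the chapter describes comes from the aggregation step: clients send model parameters, never records, and the server combines them. A minimal sketch of the standard FedAvg rule (dataset-size-weighted averaging; the flat-list parameter layout is our simplification):

```python
def fed_avg(client_weights, client_sizes):
    # FedAvg aggregation: average client parameter vectors, weighting each
    # client by its local dataset size. Raw patient records never leave
    # the client; only these parameter vectors are shared.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

In a full FL round, each hospital trains locally for a few epochs, this average becomes the new global model, and the cycle repeats until convergence.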
Article
Introduction We aimed to assess the power of radiomic features based on computed tomography to predict the risk of chronic kidney disease in patients undergoing radiation therapy of abdominal cancers. Methods Fifty patients were evaluated for chronic kidney disease 12 months after completion of abdominal radiation therapy. In the first step, the region of interest was automatically extracted using deep learning models in computed tomography images. Afterward, a combination of radiomic and clinical features was extracted from the region of interest to build a radiomic signature. Finally, six popular classifiers, including Bernoulli Naive Bayes, Decision Tree, Gradient Boosting Decision Trees, K-Nearest Neighbor, Random Forest, and Support Vector Machine, were used to predict chronic kidney disease. Evaluation criteria were as follows: accuracy, sensitivity, specificity, and area under the ROC curve. Results Most of the patients (58%) experienced chronic kidney disease. A total of 140 radiomic features were extracted from the segmented area. Among the six classifiers, Random Forest performed best, with an accuracy of 94% and an AUC of 0.99. Conclusion Based on the quantitative results, we showed that a combination of radiomic and clinical features could predict chronic kidney radiation toxicities. The effect of factors such as renal radiation dose, irradiated renal volume, and 24-hour urine volume on CKD was demonstrated in this study.
Article
Background and objective To automatically identify and locate various types and states of the ureteral orifice (UO) in real endoscopy scenarios, we developed and verified a real-time computer-aided UO detection and tracking system using an improved real-time deep convolutional neural network and a robust tracking algorithm. Methods The single-shot multibox detector (SSD) was refined to perform the detection task. We trained both the SSD and Refined-SSD using 447 resectoscopy images with UO and tested them on 818 ureteroscopy images. We also evaluated the detection performance on endoscopy video frames, comprising 892 resectoscopy frames and 1366 ureteroscopy frames. UOs could not always be identified with certainty because they sometimes appeared on screen in a closed state of peristaltic contraction. To mitigate this problem and mimic the inspection behavior of urologists, we integrated the SSD and Refined-SSD with five different tracking algorithms. Results When tested on 818 ureteroscopy images, our proposed UO detection network, Refined-SSD, achieved an accuracy of 0.902. In the video sequence analysis, our detection model yielded test sensitivities of 0.840 and 0.922 on resectoscopy and ureteroscopy video frames, respectively. In addition, by testing Refined-SSD on 1366 ureteroscopy video frames, the sensitivity reached 0.922, and a lowest false-positive rate of 0.049 per image was obtained. For UO tracking performance, our proposed UO detection and tracking system (Refined-SSD integrated with CSRT) performed the best overall. At an overlap threshold of 0.5, the success rate of our proposed UO detection and tracking system was greater than 0.95 on 17 resectoscopy video clips and nearly 0.95 on 40 ureteroscopy video clips. Conclusions We developed a deep learning system that can detect and track UOs in endoscopy scenarios in real time while simultaneously maintaining high accuracy.
This approach has great potential to serve as an excellent learning and feedback system for trainees and new urologists in clinical settings.
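The "success rate at an overlap threshold of 0.5" quoted above is a standard tracking metric: the fraction of frames where the predicted box overlaps ground truth by at least that intersection-over-union. A hedged, self-contained sketch of how such numbers are computed (not the authors' evaluation code):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    # Fraction of frames whose predicted box reaches the IoU threshold
    hits = sum(iou(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```

Sweeping the threshold from 0 to 1 instead of fixing it at 0.5 yields the full success plot commonly reported in tracking benchmarks.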
Article
Performing complex tasks with soft robots in constrained environments remains an enormous challenge owing to the limitations of flexible mechanisms and control methods. In this paper, a novel biomimetic soft robot driven by Shape Memory Alloy (SMA), with light weight and multi-motion abilities, is introduced. We adopt deep learning to perceive irregular targets in an unstructured environment. For the target-searching task, an intelligent visual servo control algorithm based on Q-learning is proposed to generate distance-directed end-effector locomotion. In particular, a threshold reward system for the target-searching task is proposed to enable a certain degree of tolerance for pointing errors. In addition, the angular velocity and working space of the end effector, with and without load, based on the established coupling kinematic model are presented. Our framework enables the trained soft robot to take actions and perform target searching. Realistic experiments under different conditions demonstrate the convergence of the learning process and the effectiveness of the proposed algorithm.
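A threshold reward of the kind described above grants full reward once the pointing error falls inside a tolerance band, instead of demanding exact alignment. The specific shape below (constant reward inside the band, linear penalty outside) is our own illustrative choice:

```python
def threshold_reward(error, tolerance=0.05, scale=1.0):
    # Threshold reward sketch: any pointing error within the tolerance
    # earns the full reward, so the agent is not punished for small,
    # physically irrelevant misses; outside it, a shaped penalty grows
    # with the distance to the tolerance band.
    if error <= tolerance:
        return 1.0
    return -scale * (error - tolerance)
```

The flat region is what gives the "tolerance for pointing errors": the Q-learner stops chasing sub-tolerance precision that a compliant SMA-driven body cannot reliably deliver anyway.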
Article
Background: Stage-specific guideline recommendations are lacking for chemotherapy in micropapillary carcinoma of the urinary bladder (MCUB). Objective: To test the efficacy of stage-specific chemotherapy for MCUB. Design, setting, and participants: Within the Surveillance, Epidemiology and End Results (SEER) registry (2001-2016), we identified patients with MCUB and pure urothelial carcinoma of the urinary bladder (UCUB) of all stages. Outcome measurements and statistical analysis: Kaplan-Meier survival analyses and multivariate Cox regression models were used to determine cancer-specific mortality (CSM), in addition to power analyses. Results and limitations: Of 210 491 patients of all stages, 518 (0.2%) harboured MCUB versus 209 973 (99.8%) UCUB. Stage at presentation was invariably higher in MCUB than in UCUB patients. Of the MCUB patients, 223 (43.1%) received chemotherapy versus 42 921 (20.4%) of the UCUB patients. In MCUB patients, chemotherapy improved CSM-free survival significantly in metastatic stage (hazard ratio [HR] 0.36, p = 0.04). Longer median CSM-free survival was also associated with chemotherapy use in addition to radical cystectomy (RC) versus RC alone in non-organ-confined MCUB (HR 0.69, p = 0.2). Additional power analyses revealed an underpowered comparison. Finally, no CSM difference was recorded in organ-confined MCUB according to the use of chemotherapy in addition to RC versus RC alone (HR 0.98, p = 1). Conclusions: Stage at presentation was invariably higher in MCUB than in UCUB patients. A substantial CSM reduction was associated with chemotherapy use in metastatic MCUB. A promising protective effect of perioperative chemotherapy might also apply to non-organ-confined MCUB, but without sufficient statistical power. Conversely, no association was recorded in organ-confined MCUB.
Patient summary: Patients with micropapillary carcinoma of the urinary bladder (MCUB) present at higher tumour stages than those with urothelial carcinoma of the urinary bladder. Chemotherapy for MCUB is effective in metastatic stages but shows no benefit in the organ-confined stage. In not-yet-metastatic but already non-organ-confined stages, we did not have enough observations to show a statistically significant protective effect of chemotherapy.
Article
Background and objective Chronic kidney disease is a worldwide health issue that includes not only kidney failure but also complications of reduced kidney function. Cyst formation, nephrolithiasis (kidney stones), and renal cell carcinoma (kidney tumor) are common kidney disorders that affect kidney function. These disorders are typically asymptomatic; therefore, early and automatic diagnosis of kidney disorders is required to avoid serious complications. Methods This paper proposes an automatic classification of B-mode kidney ultrasound images based on an ensemble of deep neural networks (DNNs) using transfer learning. Ultrasound images are usually affected by speckle noise, and image quality selection is based on the perception-based image quality evaluator score. Three variant datasets are given to the pre-trained DNN models for feature extraction, followed by a support vector machine for classification. The predictions of different pre-trained DNNs (ResNet-101, ShuffleNet, and MobileNet-v2) are combined, and final predictions are made using the majority-voting technique. By combining the predictions from multiple DNNs, the ensemble model shows better classification performance than the individual models. The presented method proved its superiority when compared to conventional and DNN-based classification methods. The developed ensemble model classifies kidney ultrasound images into four classes: normal, cyst, stone, and tumor. Results To highlight the effectiveness of the proposed approach, the ensemble-based approach is compared with existing state-of-the-art methods and tested on variants of ultrasound images, including quality and noisy conditions. The presented method achieved a maximum classification accuracy of 96.54% when testing with quality images and 95.58% when testing with noisy images. The performance of the presented approach is evaluated based on accuracy, sensitivity, and selectivity.
Conclusions From the experimental analysis, it is clear that the ensemble of DNNs classifies the majority of images correctly and achieves the maximum classification accuracy compared with existing methods. This automatic classification approach is a supporting tool for radiologists and nephrologists for the precise diagnosis of kidney diseases.
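The majority-voting step used above can be sketched in a few lines: each model casts one class vote per image, and the most frequent class wins. The tie-breaking rule (first class to reach the top count) is our own assumption for the sketch:

```python
from collections import Counter

def majority_vote(predictions):
    # Combine per-model class votes for one image; ties broken by the
    # first prediction that reaches the winning count.
    counts = Counter(predictions)
    top = max(counts.values())
    for p in predictions:
        if counts[p] == top:
            return p

def ensemble_predict(model_outputs):
    # model_outputs[m][i] is model m's predicted class for image i
    n_images = len(model_outputs[0])
    return [majority_vote([m[i] for m in model_outputs])
            for i in range(n_images)]
```

With three heterogeneous backbones such as ResNet-101, ShuffleNet, and MobileNet-v2, a vote flips the final label only when at least two models agree against the third, which is why the ensemble tolerates individual-model errors on noisy images.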