
An Overview of Clinical Decision Support System (CDSS) as a Computational Tool and Its Applications in Public Health


Abstract

In this day and age, with technology advancing rapidly, it has become possible to store and access tremendous amounts of data at one’s fingertips. Diligent utilization of patient medical records is essential for making judicious clinical decisions and for providing health care of the highest order. Public health concerns are steadily increasing as a result of the expanding population; hence, there is an exponential surge in the amount of data that requires processing. Big data tools that can efficiently minimize processing time and eliminate errors are the need of the hour. The clinical decision support system (CDSS) is one such advancement that has been gaining traction in recent years. A CDSS can be defined as “any electronic or non-electronic active knowledge system specifically designed to aid in clinical decision-making, in which parameters of individual patient health can be used to intelligently filter and generate patient-specific evaluations and assessments which serve as recommendations to clinicians during treatment, thereby enhancing patient care.” A CDSS is an information technology tool that, based on a patient’s input data, can provide assessments, prognoses, and medical recommendations suited to the nature of the medical condition. CDSS is a major player in the field of artificial intelligence in medicine. It is a revolutionary method with the potential to galvanize the field of health care, as evidenced by statistical analyses and the multiple successful case studies documented in this chapter.
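To make the definition above concrete, the sketch below shows the rule-based pattern it describes: patient parameters are filtered through simple knowledge rules to produce patient-specific alerts. It is only a minimal illustration; the patient fields, thresholds, and rules are assumptions for demonstration, not clinical guidance or the chapter's own system.

```python
# Minimal, illustrative rule-based CDSS sketch (hypothetical fields and
# thresholds, not clinical guidance): patient parameters are filtered
# through simple knowledge rules to produce patient-specific alerts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    age: int
    systolic_bp: int          # mmHg
    hba1c: float              # %
    current_meds: List[str] = field(default_factory=list)

def evaluate(patient: Patient) -> List[str]:
    """Apply knowledge rules to one patient record and return recommendations."""
    alerts = []
    if patient.hba1c >= 6.5:
        alerts.append("HbA1c >= 6.5%: evaluate for diabetes mellitus.")
    if patient.systolic_bp >= 140:
        alerts.append("Systolic BP >= 140 mmHg: consider hypertension work-up.")
    if "metformin" in patient.current_meds and patient.hba1c >= 8.0:
        alerts.append("Poor glycaemic control on metformin: review therapy.")
    return alerts

if __name__ == "__main__":
    print(evaluate(Patient(age=58, systolic_bp=152, hba1c=8.3,
                           current_meds=["metformin"])))
```

A production CDSS would draw such rules from clinical practice guidelines and evaluate them against the electronic medical record rather than hard-coded thresholds.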
Chapter
Medicine faces a dilemma regarding the use of AI: it can either choose not to use AI, potentially sacrificing the best care for patients, or embrace its use, which may lead to insurmountable challenges in properly attributing responsibility, in the form of a responsibility gap. In order to illustrate the responsibility gap, the following approach will be taken: Firstly, an analysis of three problem dimensions is necessary. The first dimension pertains to the nature of decision support systems. The second dimension encompasses various concepts and prerequisites of moral responsibility. The third dimension introduces a relatively new aspect in the debate about the responsibility gap, considering the physician and the clinical decision support system as a coupled cognitive system, as proposed in the extended mind theory. Subsequently, the existence of a responsibility gap will be demonstrated, along with an explanation of how it arises. The essay will conclude with a rather pessimistic outlook on the possibility of bridging this gap.
Article
Algorithmic decision-making systems (ADMS) are increasingly being used by public and private organizations to enact decisions traditionally made by human beings across a broad range of domains, including business, law enforcement, education, and healthcare. Their growing prevalence engenders profound ethical challenges, which, we maintain, should be examined in a structured and theoretically informed fashion. However, much of the ethical exploration of ADMS within the IS field draws upon an atheoretical application of ethics. In this paper, we argue that the “big three” ethical theories of consequentialism, deontology, and virtue ethics can inform a structured comparative analysis of the ethical significance of ADMS. We demonstrate the value of such an approach through an illustrative case study of an ADMS in use by an Australian bank. Building upon this analysis, we address four characteristics of ADMS from the three theoretical perspectives, provide guidance on the contexts within which the application of each theory might be particularly fruitful, and highlight the advantages of theoretically grounded ethical analyses of ADMS.
Article
Full-text available
The verb MAKE is one of the most intriguing verbs in the English language. Not only does it occur in various contexts and situations, but it also conveys a cluster of meanings, depending on the context of its use. Rather than dealing with all uses and meanings of MAKE, I devote this paper to the study of the semantics of causative MAKE, that is, the uses of MAKE with the complementation pattern [NP VP NP VP]. Drawing upon a corpus study of the occurrences of causative MAKE in the British component of the International Corpus of English, this paper challenges the widely shared assumption that MAKE is a coercive verb and highlights the polysemous nature of causative MAKE, which expresses a cluster of semantic values depending on the lexical and conceptual properties of the causative situation.
Article
Full-text available
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: (1) new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; (2) automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; (3) new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures and federated machine learning methods; (4) machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and (5) validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.
Article
Full-text available
Missing data in datasets remains a difficulty for data analysis in various research fields, especially in medicine, as it affects the treatment and diagnosis a patient should receive. In this research, fuzzy c-means (FCM) is used to impute the missing data. However, like most data imputation methods, FCM does not consider the presence of irrelevant features. Irrelevant features can increase the computational time of the imputation process and decrease the accuracy of the prediction. Feature selection techniques can alleviate this problem by selecting the most relevant features and reducing the dataset size. Fuzzy principal component analysis (FPCA) is used as the feature selection method in this study because, unlike classical PCA, it accounts for outliers, which are the main reason some features become irrelevant. Therefore, an improved hybrid imputation model, FPCA–Support Vector Machines–FCM (FPCA–SVM–FCM), is proposed and employed in this study. The efficiency of the proposed model is investigated on one dataset, the Pima Indians Diabetes dataset. Experimental results showed that the proposed hybrid imputation model outperforms existing methods by producing more accurate estimates in terms of accuracy, RMSE, and MAE. The proposed method was also validated using the Wilcoxon rank-sum and Theil’s U tests and obtained good results compared with SVM–FCM. It can therefore be used as an alternative tool for handling missing data in order to obtain a better-quality dataset.
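As a rough sketch of the imputation step only, the NumPy code below implements plain fuzzy c-means from scratch and fills missing values with a membership-weighted blend of the cluster centres. The FPCA feature-selection and SVM stages of the hybrid model are omitted, and the cluster count, fuzzifier, and iteration budget are assumed defaults, not the paper's settings.

```python
# From-scratch sketch of FCM-based imputation (FPCA and SVM stages omitted).
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Fit fuzzy c-means on complete data X (n x d); return cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centres

def impute_row(row, centres, m=2.0):
    """Fill NaNs in one row with a membership-weighted blend of the centres."""
    obs = ~np.isnan(row)
    d = np.linalg.norm(centres[:, obs] - row[obs], axis=1) + 1e-12
    u = 1.0 / (d ** (2 / (m - 1)))
    u /= u.sum()
    filled = row.copy()
    filled[~obs] = u @ centres[:, ~obs]
    return filled

# Example usage: fit on complete rows, then impute each incomplete row.
# X_complete = data[~np.isnan(data).any(axis=1)]
# centres = fcm(X_complete)
# imputed = np.array([impute_row(r, centres) if np.isnan(r).any() else r
#                     for r in data])
```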
Chapter
‘Water’ is one of the key resources available to mankind. All living things consist mostly of water; the human body, for example, is about 67% water. Water is a crucial component of life, essential for sustenance, and a vital input for agricultural productivity; using it as efficiently as possible is therefore key to improving farming and gardening across the nation. Sensoponics helps farmers and gardeners distribute water to crops by supplying it only when the plants need it, which prevents water wastage and soil degradation. In this project, we develop an automated smart monitoring and irrigation system that lets farmers and gardeners check the status of their crops from home or from any part of the world. The system helps them irrigate the land in an organized manner based on soil moisture, atmospheric temperature and humidity, and the water consumption of the plants. Surplus irrigation reduces plant production, degrades soil fertility, and creates ecological hazards such as water wastage and soil degradation. The smart system not only provides convenience but also saves energy and time. Many farmers today cannot afford industry-grade automation and control equipment, which is costly. In this project, we therefore apply the Internet of Things (IoT): sensor data are read with an Arduino Uno and sent to ThingSpeak, an open IoT cloud platform, to store and analyze the sensor data.
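The Arduino firmware itself would be written in C++, but to keep all examples here in one language the sketch below shows only the cloud-upload step in Python, with simulated sensor readings posted to ThingSpeak's public update endpoint. The write API key is a placeholder, and the field assignments are assumptions about how the channel might be configured.

```python
# Sketch of the cloud-upload step only: simulated soil-moisture and
# temperature/humidity readings are posted to a ThingSpeak channel over HTTP.
# THINGSPEAK_WRITE_KEY is a placeholder for the channel's write API key.
import random
import time
import requests

THINGSPEAK_WRITE_KEY = "YOUR_WRITE_API_KEY"
UPDATE_URL = "https://api.thingspeak.com/update"

def read_sensors():
    """Stand-in for the Arduino readings (soil %, temperature C, humidity %)."""
    return {
        "field1": round(random.uniform(20, 80), 1),   # soil moisture
        "field2": round(random.uniform(18, 40), 1),   # air temperature
        "field3": round(random.uniform(30, 90), 1),   # air humidity
    }

while True:
    payload = {"api_key": THINGSPEAK_WRITE_KEY, **read_sensors()}
    r = requests.get(UPDATE_URL, params=payload, timeout=10)
    print("ThingSpeak entry id:", r.text)   # "0" means the update was rejected
    time.sleep(20)                          # free channels accept ~1 update per 15 s
```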
Article
Researchers train and build specific models to classify the presence or absence of a disease, and the accuracy of such classification models is continuously being improved. The process of building and training a model depends on the medical data utilized. Various machine learning techniques and tools are used to handle different data with respect to disease types and their clinical conditions. Classification is the most widely used technique for diagnosing disease, and the accuracy of a classifier largely depends on the attributes; the choice of attributes strongly affects both the diagnosis and the classifier’s performance. With the growing volume of medical data across different clinical conditions, methods for choosing relevant attributes and features in datasets that target specific diseases are still lacking. This study uses ensemble-based feature selection with random trees and a wrapper method to improve classification. The proposed ensemble learning classification method derives a feature subset using the wrapper method, bagging, and random trees. The method removes irrelevant features and selects the optimal features for classification through a probability-weighting criterion. The improved algorithm is able to distinguish relevant from irrelevant features and improve classification performance. The proposed feature selection method is evaluated using SVM, RF, and NB classifiers, and its performance is compared against the FSNBb, FSSVMb, GASVMb, GANBb, and GARFb methods. The proposed method achieves a mean classification accuracy of 92% and outperforms the other ensemble methods.
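The paper's probability-weighting criterion is not reproduced here; the scikit-learn sketch below only illustrates the general pattern, ranking features with a random forest (bagging of random trees), choosing a subset size wrapper-style by cross-validated score, and then evaluating the subset with SVM, RF, and NB. The dataset and candidate subset sizes are assumptions for demonstration.

```python
# Generic sketch: random-forest feature ranking, wrapper-style subset sizing,
# then evaluation with SVM, RF, and NB (not the paper's exact method).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
ranking = np.argsort(RandomForestClassifier(random_state=0).fit(X, y)
                     .feature_importances_)[::-1]

best_k, best_score = None, -1.0
for k in (5, 10, 15, 20):                       # candidate subset sizes
    score = cross_val_score(SVC(), X[:, ranking[:k]], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score

X_sel = X[:, ranking[:best_k]]
for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("NB", GaussianNB())]:
    print(name, cross_val_score(clf, X_sel, y, cv=5).mean().round(3))
```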
Article
Background and aim: One of the prerequisites for developing computerised decision support systems is clinical practice guidelines (CPGs), which provide a systematic aid for making complex medical decisions. To automate a CPG, the guidelines need a uniform structure. This study aims to propose such a framework for Persian guidelines. Materials and methods: Twenty Persian CPGs were selected and divided into creation and validation sets (n=10 each). The first group was studied independently and their headings were listed; wherever possible, headings were merged into a new heading applicable to all the guidelines. The developed framework was then validated against the second group of guidelines. Results: The studied guidelines had a very heterogeneous structure. The 249 original headings were reduced to 14 main headings with 16 subheadings in the developed framework, which is able to represent and cover 100% of the guidelines. Conclusion: The heterogeneity of the guidelines was high because they had not been developed from a common framework. The proposed framework provides a layout for designing CPGs with a homogeneous structure, and guideline developers can use it to develop structured CPGs. This will facilitate the integration of guidelines into electronic medical records as well as clinical decision support systems.
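The abstract does not list the framework's 14 headings and 16 subheadings, so the sketch below only illustrates, with placeholder heading names, what a machine-readable representation of such a structured CPG might look like for integration into a CDSS.

```python
# Hypothetical, minimal data model for a structured CPG; the heading names
# are placeholders, not the framework's actual 14 headings / 16 subheadings.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GuidelineSection:
    heading: str                                   # a main heading of the framework
    subheadings: Dict[str, str] = field(default_factory=dict)

@dataclass
class ClinicalPracticeGuideline:
    title: str
    sections: List[GuidelineSection] = field(default_factory=list)

cpg = ClinicalPracticeGuideline(
    title="Example CPG (placeholder)",
    sections=[
        GuidelineSection("Scope and purpose",
                         {"Target population": "...", "Intended users": "..."}),
        GuidelineSection("Recommendations",
                         {"Key recommendations": "..."}),
    ])
print(len(cpg.sections), "sections loaded")
```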
Chapter
Credit card fraud is a crucial issue that cardholders and card-issuing companies have faced for decades. Credit card fraud occurs at two levels: application-level fraud and transaction-level fraud. This paper focuses on credit card fraud detection at the application level using feature selection methods. J48 decision tree, AdaBoost, Random Forest, Naive Bayes, and PART machine learning techniques are used to detect credit card fraud, and their performance is compared on the basis of sensitivity, specificity, precision, recall, MCC, and accuracy. The German credit dataset is used to evaluate the efficiency of these machine learning techniques with filter and wrapper feature selection methods. The experimental outcomes show that the prediction accuracy of J48 and PART increased after applying the filter and wrapper methods, and the precision and sensitivity of J48, AdaBoost, and Random Forest were also enhanced.
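J48 and PART are Weka algorithms; the scikit-learn sketch below is only a rough analogue of the comparison, with DecisionTreeClassifier standing in for J48, PART omitted, and a synthetic dataset standing in for the German credit data. It shows how the reported metrics (sensitivity, specificity, precision, MCC, accuracy) could be computed for each model.

```python
# Rough analogue of the model comparison with the metrics the paper reports.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             matthews_corrcoef, precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)   # stand-in for the German credit data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {"DecisionTree (J48 analogue)": DecisionTreeClassifier(random_state=0),
          "AdaBoost": AdaBoostClassifier(random_state=0),
          "RandomForest": RandomForestClassifier(random_state=0),
          "NaiveBayes": GaussianNB()}

for name, clf in models.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
          f"sens={recall_score(y_te, y_pred):.3f} "
          f"spec={tn / (tn + fp):.3f} "
          f"prec={precision_score(y_te, y_pred):.3f} "
          f"mcc={matthews_corrcoef(y_te, y_pred):.3f}")
```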
Article
The main challenge of feature selection is overcoming the curse of dimensionality. In this paper, a new Bacterial Colony Optimization method with a Multi-Dimensional Population, abbreviated BCO-MDP, is presented for feature selection for classification. To address the combinatorial problem associated with feature selection, the population with multiple dimensionalities is represented by subsets of different feature sizes. The population is grouped into ‘tribes’: the sizes of the feature subsets within a tribe are equal, while the dimensionalities differ across tribes, enabling parallel solutions. Features are identified by their contributions to the most promising solutions in the total population and by the classification performance of their tribes. A search is then conducted for the optimal feature subsets with varying dimensionalities. The convergence speed can be enhanced by a variety of exchange strategies within and between tribes. The proposed BCO-MDP method is shown to be superior to binary algorithms in terms of feature size and efficiency, while having lower computational complexity than other population-based algorithms with constant dimensionality.
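BCO-MDP itself is not reproduced here; the toy sketch below only illustrates its multi-dimensional-population idea: each tribe holds candidate feature subsets of one fixed size, subsets are scored by classification accuracy, and the best subset across tribes is retained. The dataset, classifier, tribe sizes, and population size are assumptions, and the bacterial foraging and exchange strategies are omitted.

```python
# Toy illustration of the multi-dimensional-population idea only
# (not the BCO-MDP algorithm): tribes of fixed-size feature subsets,
# scored by cross-validated accuracy, best subset kept.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def score(subset):
    """Classification accuracy of a candidate feature subset."""
    return cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=3).mean()

best_subset, best_score = None, -1.0
for tribe_size in (3, 5, 8):                  # tribes = distinct subset dimensionalities
    tribe = [rng.choice(n_features, size=tribe_size, replace=False)
             for _ in range(10)]              # 10 candidate subsets per tribe
    for subset in tribe:
        s = score(subset)
        if s > best_score:
            best_subset, best_score = subset, s

print("best subset:", sorted(best_subset.tolist()),
      "cv accuracy:", round(best_score, 3))
```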