Deep Convolutional Neural Network Based
Covid-19 Classification From Radiology X-Ray
Images For IoT Enabled Devices
Yogesh H. Bhosale
Dept. of Computer Science and
Engineering
Birla Institute of Technology (BIT)
Mesra, Ranchi, India.
yogeshbhosale988@gmail.com
Mahendra Nakrani
Dept. of Computer Vision Engineering
WeAgile Software Solution Pvt. Ltd
Pune, India
nakrani.mahender@gmail.com
Shrinivas Zanwar
Dept. of Artificial Intelligence and Data
Science
CSMSS, Chh. Shahu College of
Engineering, Aurangabad, India
shrinivas.zanwar@gmail.com
Devendra Bhuyar
Dept. of Electronics and Computer
Engineering
CSMSS, Chh. Shahu College of
Engineering, Aurangabad, India
devbhuyar@gmail.com
Zakee Ahmed
Dept. of Artificial Intelligence and Data
Science
CSMSS, Chh. Shahu College of
Engineering, Aurangabad, India.
zakee4@gmail.com
Ulhas Shinde
Dept. of Electronics and Telecomm.
Engineering
CSMSS, Chh. Shahu College of
Engineering, Aurangabad, India
devbhuyar@gmail.com
Abstract- The Coronavirus Disease 2019 (COVID-19) epidemic, which erupted at the end of 2019, spread rapidly from Wuhan, China across many nations. This highly contagious infectious disease continues to spread rapidly among the public. Early research on COVID-19-affected patients has revealed distinctive anomalies in chest radiography images. As a result, it is now necessary to identify various risk factors that can move an infected person from a mild to a serious stage of sickness. Deep Learning (DL) strategies, a subset of Artificial Intelligence (AI), are used to deal with many real-life problems. This paper introduces a Deep Convolutional Neural Network (DCNN) to perform multiclass classification of COVID-19, Pneumonia, and Normal patients from radiological imaging of the chest. The work is also implemented with an IoT framework used for communication between the user and the DCNN model. This DCNN classification mechanism achieved a test accuracy of 94.95% for COVID-19. The datasets used are acquired from Kaggle and GitHub.
Keywords- Deep Learning, Classification, Deep Convolutional
Neural Network, Radiology Images, X-ray, Diagnosis.
I. INTRODUCTION
Nowadays, the notion of the Internet of Things (IoT) is very popular in artificial intelligence applications, so IoT-based automation is combined here with the deep learning model. COVID-19 continues to be a major threat to human health worldwide, with millions estimated to be affected within a few months of the outbreak and thousands of deaths [1]. Humans are infected with Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19. Effective and accurate detection of the infection is one of the crucial steps in combating it, since it allows infected people to start treatment immediately and to be isolated so that the virus does not spread further. The most common diagnostic techniques used to test people for COVID-19 infection are the Antigen Test and the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test [2], which can detect SARS-CoV-2 RNA in respiratory samples obtained by several means, such as nasopharyngeal or oropharyngeal swabs. Although RT-PCR testing is now the yardstick for confirming COVID-19 infection because of its sensitivity, it is laborious and demanding. The limited availability of RT-PCR kits and the need to access a research-level laboratory make large-scale testing a daunting challenge.
Fig. 1. Chest Radiography Image
An alternative to RT-PCR testing is radiography examination for COVID-19 screening. In a radiographic examination, experienced radiologists acquire and analyze chest radiography images. Radiologists then extract visual cues to diagnose SARS-CoV-2 infection, as shown in Fig. 1. Most of the visual cues in chest radiology images of COVID-19 show similar features, such as rounded morphology, ground-glass opacities, peripheral lung distribution, and lung consolidation [3]. Although radiological chest images may assist in the initial diagnosis of suspected cases, the features of various major types of pneumonia are similar. Consequently, it is difficult for diagnosticians to distinguish COVID-19 from other types of pneumonia. This motivates the search for a computer-aided diagnosis (CAD) system to help the radiologist interpret radiological chest images and classify COVID-19 accurately and rapidly.
The major contributions are as follows:
- A custom DCNN classification model is proposed that can be employed to distinguish COVID-19 individuals using X-ray images.
- To improve classifier efficiency, different preprocessing and training methods were used.
- The samples in public repositories are small and skewed. Multi-control data augmentation was used to address this while balancing the samples across all classes.
- This DCNN classification mechanism can help radiologists and clinicians detect COVID-19.
The remainder of the paper is organized as follows. Section II discusses related work. Section III presents the proposed classifier and IoT framework. Results and conclusions are given in Sections IV and V, respectively.
II. RELATED WORK
The need for a CAD system to detect COVID-19 accurately and rapidly has led to many deep-learning-based solutions over the past few months. For instance, in [4], regions of interest (ROIs) are extracted from computed tomography (CT) images using a modified Inception network. Features are extracted from these ROIs for classification, which was done using ensemble learning with AdaBoost and Decision Trees. In [5], transfer learning was proposed to extract features using pre-trained CNNs such as ResNet, Xception, Inception, DenseNet, and GoogLeNet. The classification of COVID-19 from the extracted features was done using a support vector machine (SVM).
COVID-Net, a residual architecture-based CNN, was proposed in [6] to classify chest radiography images. DeTraC, a pre-trained CNN, was proposed in [7] to extract deep local features; a class decomposition layer and a class composition layer are attached to provide the final classification. In [8], a CNN-based deep learning model consisting of three components, a backbone network, a classification head, and an anomaly detection head, was proposed. High-level features are extracted using the backbone network. These high-level features are fed to the classification head and to the anomaly detection head, which produce the classification scores and anomaly scores, respectively. The classification is done by applying thresholds to the average classification score and the anomaly score.
Ibrahim et al. [9] proposed the recognition of COVID-19 pneumonia, non-COVID-19 viral pneumonia, and bacterial pneumonia using a deep neural network. Both binary and multiclass models were trained: each infection type was distinguished from healthy CXR images in binary classifications, and all classes were separated in a multiclass setting. Emtiaz Hussain et al. [10] introduced CoroDet, a CNN method for automatic recognition using raw chest X-ray images and computed tomography (CT) scans. CoroDet was created as a diagnostic tool for two-class, three-class, and four-class classification.
A Support Vector Machine (SVM) classifier was trained on deep features, with multiple kernel functions such as Linear, Quadratic, Cubic, and Gaussian, by Aras M. Ismael et al. [11]. That work also fine-tuned the pre-trained deep CNN models mentioned earlier and proposed a new CNN model trained end-to-end. Performance was measured using classification accuracy, and the experimental results suggest that COVID-19 can be detected from X-ray images. In the present paper, a deep CNN is proposed to classify chest radiology images into Pneumonia, COVID-19, and Normal classes.
III. PROPOSED CLASSIFIER WITH IOT FRAMEWORK
The IoT framework for the classification of COVID-19 is shown in Fig. 2. A patient or doctor uses a handheld device to capture X-ray images, which are uploaded to AWS S3 storage. The web server accepts the uploaded image after authentication and tests it with the trained model.
The deployed model runs in an AWS Cloud9/EC2 environment, where the trained weights are stored. The inference decision is then sent back to the user application.
Fig. 2. IoT framework
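To make this flow concrete, the following is a minimal sketch of such a pipeline, not the authors' actual implementation; the bucket name, endpoint path, token check, and model filename are illustrative placeholders.

```python
# Illustrative sketch of the IoT flow described above (not the authors' code).
# Bucket name, endpoint path, token value, and model file are placeholders.
import io

import boto3
import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("dcnn_covid_weights.h5")   # trained weights stored on the EC2 instance
CLASSES = ["COVID-19", "Normal", "Pneumonia"]
s3 = boto3.client("s3")

@app.route("/classify", methods=["POST"])
def classify():
    # Stand-in for the authentication step performed by the web server.
    if request.headers.get("X-Auth-Token") != "expected-token":
        return jsonify({"error": "unauthorized"}), 401

    # The handheld device has uploaded the X-ray to S3; the request carries its key.
    key = request.get_json()["s3_key"]
    obj = s3.get_object(Bucket="xray-uploads", Key=key)
    img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB").resize((256, 256))

    x = np.asarray(img, dtype="float32")[None, ...] / 255.0
    probs = model.predict(x)[0]               # softmax over the three classes
    return jsonify({"label": CLASSES[int(np.argmax(probs))],
                    "probabilities": probs.tolist()})
```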
Fig. 3. Classification Model
The classification model used for the COVID-19 classification analysis is shown in Fig. 3. Initially, the dataset is collected from a variety of sources. Preprocessing stages are necessary to prepare the data; thresholding and segmentation are done at this stage. A convolutional neural network is then constructed to determine whether or not an image is COVID-infected.
A. Data set
Two publicly available databases were combined to train, validate, and test the suggested DCNN. The first comes from the RSNA Pneumonia Detection Challenge [12] hosted on Kaggle. This database contains about 25,000 cases of pneumonia and non-pneumonia (i.e., normal) radiology images. From this database, only 165 pneumonia images and 170 normal radiology images were randomly selected. Image augmentation is then used to increase the size of the dataset; the final details of the dataset are given in Table I.
The second database is the COVID-19 image data collection [13] from GitHub. It contains 350 COVID-19 chest radiology images, of which only 220 were selected. This highlights the scarcity of COVID-19 radiology images available in the public domain and the need to improve the quality and quantity of research data required to manage this epidemic in the future.
TABLE I. DATABASE DETAILS USED FOR TRAINING AND TESTING
Disease Type Image Samples
Normal 1540
COVID-19 1520
Pneumonia 1560
B. Preprocessing
Since a CNN is used for feature extraction and classification, it is not necessary to perform extensive data correction through preprocessing. Still, three stages are applied in this phase: resizing, augmentation, and histogram equalization to increase the number of training images, and segmentation to correct the dataset.
1) Resize Data
Resizing changes the size of an image without leaving any content out. All chest radiography images are resized to 256x256 during preprocessing, which reduces training time and improves the system's performance [14]. After this step, the data is labeled, which is required to train the neural network.
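As a small illustration (not the authors' code; the directory layout is an assumption), resizing and labeling can be performed in one step with Keras:

```python
# Minimal sketch: load, resize to 256x256, and label the X-ray images with Keras.
# The folder names are assumptions, one subfolder per class.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train",            # subfolders: covid19/, normal/, pneumonia/
    image_size=(256, 256),      # every image is resized to 256x256 pixels
    label_mode="categorical",   # one-hot labels for the three classes
    batch_size=32,
)
```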
2) Image Augmentation
Image augmentation is a method that increases the number of training images by intelligently modifying existing images. Several augmentation methods exist, such as rotating, shifting, flipping, noising, and blurring [15]. Augmentation acts as a regularization strategy that reduces model overfitting by producing new training samples from the existing training set. Flipping, noising, and blurring are applied here in the image augmentation process.
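A rough sketch of those three operations on a single image array is shown below; the parameter values are illustrative and not taken from the paper.

```python
# Illustrative flipping, noising, and blurring of one image (H x W x 3 uint8 array).
import numpy as np
from scipy.ndimage import gaussian_filter

def augment(image, rng=np.random.default_rng(0)):
    flipped = np.fliplr(image)                                # horizontal flip
    noisy = image + rng.normal(0.0, 10.0, image.shape)        # additive Gaussian noise
    noisy = np.clip(noisy, 0, 255).astype(image.dtype)
    blurred = gaussian_filter(image, sigma=(1.0, 1.0, 0.0))   # mild Gaussian blur
    return [flipped, noisy, blurred]
```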
3) Histogram Equalization
Histogram equalization is a technique used to enhance image contrast by redistributing pixel intensities. It does this by spreading out the most frequent intensity values, i.e., stretching the image's intensity range [16].
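For example, with OpenCV the equalization can be applied to the luminance channel; this is a sketch assuming 8-bit images, not the authors' exact procedure.

```python
# Histogram equalization sketch with OpenCV, applied to the luminance channel
# so that both grayscale-rendered and color X-ray images are handled.
import cv2

def equalize(image_bgr):
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # spread out the intensity histogram
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```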
C. Deep Convolutional Neural Network (DCNN)
A two-dimensional DCNN based on a residual network is proposed for COVID-19 classification from X-ray images, as shown in Fig. 4.
The proposed DCNN has five convolutional blocks connected through residual connections. Each convolutional block contains three batch normalization layers interleaved with two convolution layers, as shown in Fig. 5. Filter counts of 16, 32, 64, 128, and 256 are used for the convolution layers of the five blocks. The residual connection consists of one convolution layer with the same filter size and number of filters as the convolution block, and its output is combined with the block output in an add layer. A max-pooling layer is introduced after the add layer, reducing the spatial size of the feature maps.
The Rectified Linear Unit (ReLU) activation function, described in Eq. (1), is applied to the convolution layers. The deep convolutional neural network model is formed of different layers, as shown in Fig. 4. The first layer is the input layer, with a size of (256, 256, 3). The Softmax function [17], given in Eq. (2), is used as the activation of the final dense layer and provides a probability for each of the three classes, namely COVID-19, Pneumonia, and Normal.
Fig. 4. Deep Convolutional Neural Network Model
Fig. 5. N- Filter Convolution Blocks.
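The following Keras sketch reflects one reading of Figs. 4 and 5; the kernel size, padding, and flatten-plus-dense head are assumptions, and it is not the authors' exact code.

```python
# Minimal sketch of the residual convolutional block and the stacked model
# described above (interpretation of Figs. 4-5; hyperparameters are assumed).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    shortcut = layers.Conv2D(filters, 3, padding="same")(x)    # residual branch
    y = layers.BatchNormalization()(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])                             # add layer
    return layers.MaxPooling2D()(y)                             # reduce feature-map size

inputs = layers.Input(shape=(256, 256, 3))
x = inputs
for filters in (16, 32, 64, 128, 256):                          # five convolutional blocks
    x = conv_block(x, filters)
x = layers.Flatten()(x)
outputs = layers.Dense(3, activation="softmax")(x)              # COVID-19 / Normal / Pneumonia
model = Model(inputs, outputs)
```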
The dataset is balanced by considering 220 normal images and 210 pneumonia images selected randomly for the experiments. The image augmentation technique (Section III.B.2) is used to increase the dataset, and the final dataset details are given in Table I.
f(x) = \max(0, x)    (1)

\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}, \quad j = 1, \dots, K    (2)

where z is the input vector to the final dense layer and K = 3 is the number of classes.
IV. RESULTS AND DISCUSSION
The experiments were executed in the Google Colaboratory (Colab) environment using Python, with access to 25 GB of RAM and Tesla K80 GPUs. Python library packages such as Pandas, NumPy, Matplotlib, and scikit-learn were used, along with AI frameworks such as TensorFlow and Keras. The proposed system is implemented using the Keras framework.
Fig. 6. Sample radiology images. (a) Normal, (b) COVID- 19, (c) Pneumonia.
The database is divided into training, validation, and testing sets at a ratio of 70:15:15 for this convolutional neural network model. A learning rate of 0.001 with an adaptive optimization algorithm was used to train the model for 25 epochs. The loss function used was categorical cross-entropy, with accuracy as the metric. Fig. 6 shows sample radiology images of Normal, COVID-19, and Pneumonia patients.
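A sketch of this training configuration is given below; Adam is assumed as the adaptive optimizer, and train_ds and val_ds are assumed to hold the splits prepared as in Section III.B.

```python
# Training configuration sketch: adaptive optimizer, learning rate 0.001,
# 25 epochs, categorical cross-entropy loss, accuracy metric. Adam is assumed.
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds and val_ds hold the 70% and 15% portions of the 70:15:15 split.
history = model.fit(train_ds, validation_data=val_ds, epochs=25)
```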
The proposed model's performance is measured with specificity, sensitivity, and recognition rate, as reported in Table II. In addition, the confusion matrix and ROC curve are also presented.
TABLE II. SPECIFICITY, SENSITIVITY, AND RECOGNITION RATE FOR EACH DISEASE TYPE

Infection Type    Specificity (%)    Sensitivity (%)    Recognition Rate (%)
Normal            97.17              94.48              95.55
COVID-19          96.88              94.95              94.94
Pneumonia         98.18              95.17              96.59
Fig. 7. Confusion matrix for proposed DCNN model.
The confusion matrix (CM) of the proposed deep convolutional neural network classifier on the test dataset is shown in Fig. 7. This CM describes the evaluation of the DCNN classification model: it gives the counts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). The columns are treated as predicted values, the rows as actual values, and the diagonal elements are the correctly predicted values. The diagonal values (0.94, 0.96, 0.95) are observed for the DCNN predictions.
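The per-class figures in Table II can be derived from such a confusion matrix; below is a small sketch with placeholder counts, not the paper's exact numbers.

```python
# Per-class specificity, sensitivity, and recognition rate (per-class accuracy)
# computed from a 3x3 confusion matrix; the matrix values are placeholders.
import numpy as np

def per_class_metrics(cm):
    metrics = {}
    total = cm.sum()
    for i in range(cm.shape[0]):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = total - tp - fn - fp
        metrics[i] = {
            "sensitivity": tp / (tp + fn),            # recall for class i
            "specificity": tn / (tn + fp),
            "recognition_rate": (tp + tn) / total,    # per-class accuracy
        }
    return metrics

cm = np.array([[94, 3, 3], [2, 96, 2], [3, 2, 95]])   # rows: actual, columns: predicted
print(per_class_metrics(cm))
```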
The Receiver Operating Characteristic (ROC) curve is shown in Fig. 8; it summarizes the performance obtained from the combined confusion matrices. The curve is plotted using the false positive and true positive rates.
Fig. 8. ROC Curve for proposed DCNN model.
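A sketch of how such one-vs-rest ROC curves can be produced with scikit-learn is shown below; the trained model and a test_ds built like train_ds (with shuffling disabled) are assumptions carried over from the earlier sketches.

```python
# One-vs-rest ROC curves for the three classes. test_ds must be created with
# shuffle=False so that labels and predictions stay aligned.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve

y_true = np.concatenate([y.numpy() for _, y in test_ds])   # one-hot ground truth
y_score = model.predict(test_ds)                            # softmax probabilities

for i, name in enumerate(["COVID-19", "Normal", "Pneumonia"]):
    fpr, tpr, _ = roc_curve(y_true[:, i], y_score[:, i])
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--")                    # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```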
A comparative analysis with existing work, summarized in Fig. 9, shows that the proposed convolutional neural network gives better results. The comparison is made on parameters such as sensitivity, specificity, and accuracy.
Fig. 9. Comparative analysis with existing work.
The values plotted in Fig. 9 are:

                   Proposed CNN Model                  Existing ResNet50V2 Model [18]
                   COVID-19   Pneumonia   Normal       COVID-19   Pneumonia   Normal
Sensitivity (%)    94.89      95.17       94.48        74.02      85.54       92.60
Specificity (%)    96.88      98.18       97.28        97.33      92.98       86.64
Accuracy (%)       94.04      96.59       95.55        97.26      98.39       94.85
V. CONCLUSIONS
The present research on a DCNN for identifying COVID-19 and pneumonia from chest radiology imaging has been tested and found useful, with good accuracy. The model is inspired by the residual network and achieved 94.94% and 96.59% accuracy for COVID-19 and Pneumonia, respectively, on a considerable amount of data. The model can be further enhanced using a larger database of infected COVID-19 or non-infected cases. The proposed method motivates further investigation of the developed deep convolutional neural network model with a larger database for better diagnostic accuracy. The use of IoT has reduced the time to deliver diagnosis reports to the user side through the effective use of the internet and modern computing technologies.
REFERENCES
[1] Coronavirus disease (COVID-19) pandemic, https://www.who.int/emergencies/diseases/novel-coronavirus-2019, accessed on 25 April 2020.
[2] Wang et al., "Detection of SARS-CoV-2 in different types of clinical specimens," JAMA, 2020.
[3] C. Huang, Y. Wang, X. Li, L. Ren, J. Zhao, Y. Hu, et al., "Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China," The Lancet, 2020.
[4] S. Wang, B. Kang, J. Ma, X. Zeng, M. Xiao, J. Guo, M. Cai, J. Yang, Y. Li, X. Meng, and B. Xu, "A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19)," medRxiv 2020.02.14.20023028, 24 April 2020.
[5] P. K. Sethy and S. K. Behera, "Detection of Coronavirus Disease (COVID-19) based on deep features," Preprints 2020, 2020030300, doi: 10.20944/preprints202003.0300.v1.
[6] L. Wang and A. Wong, "COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images," arXiv:2003.09871v2 [eess.IV], 30 Mar 2020.
[7] A. Abbas, M. M. Abdelsamea, and M. M. Gaber, "Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network," arXiv:2003.13815v2 [eess.IV], 18 Apr 2020.
[8] J. Zhang, Y. Xie, Y. Li, C. Shen, and Y. Xia, "COVID-19 screening on chest X-ray images using deep learning-based anomaly detection," arXiv:2003.12338v1 [eess.IV], 27 Mar 2020.
[9] A. U. Ibrahim, M. Ozsoz, S. Serte, et al., "Pneumonia classification using deep learning from chest X-ray images during COVID-19," Cognitive Computation, 2021, doi: 10.1007/s12559-020-09787-5.
[10] E. Hussain, M. Hasan, M. A. Rahman, I. Lee, T. Tamanna, and M. Z. Parvez, "CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images," Chaos, Solitons & Fractals, vol. 142, 110495, 2021.
[11] A. M. Ismael and A. Şengür, "Deep learning approaches for COVID-19 detection based on chest X-ray images," Expert Systems with Applications, vol. 164, 114054, 2021.
[12] Radiological Society of North America, "RSNA pneumonia detection challenge," https://www.kaggle.com/c/rsnapneumonia-detection-challenge/data, 2019.
[13] Cohen et al., "COVID-19 image data collection," https://github.com/ieee8023/covid-chestxray-dataset, 2020.
[14] R. M. Haralick and L. G. Shapiro, "Image segmentation techniques," Computer Vision, Graphics, and Image Processing, vol. 29, no. 1, pp. 100-132, 1985.
[15] L. Perez and J. Wang, "The effectiveness of data augmentation in image classification using deep learning," arXiv preprint arXiv:1712.04621, 2017.
[16] M. Togacar, B. Ergen, and Z. Cömert, "COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches," Computers in Biology and Medicine, vol. 121, 103805, 2020.
[17] M. Alazab, A. Awajan, A. Mesleh, A. Abraham, V. Jatana, and S. Alhyari, "COVID-19 prediction and detection using deep learning," International Journal of Computer Information Systems and Industrial Management Applications, vol. 12, pp. 168-181, 2020.
[18] M. Rahimzadeh and A. Attar, "A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the ResNet50V2," Informatics in Medicine Unlocked, 2020.