Date of publication xxxx 12, 2022, date of current version xxxx 12, 2022.
Digital Object Identifier 10.1109/ACCESS.2022.DOI
Wound Care: Wound Management
System
SHREYAMSHA KUMAR B. K.1, (Member, IEEE), ANANDAKRISHNAN KC1, MANISH SUMANT2,
and SRINIVASAN JAYARAMAN3,(Member, IEEE)
1TCS Research, Digital Medicine and Medical Technology Unit, Business Transformation Group, TATA Consultancy Services, Bangalore, India
2Digital Medicine and Medical Technology Unit, Business Transformation Group, TATA Consultancy Services, Cincinnati, OH, USA
3TCS Research, Digital Medicine and Medical Technology Unit, Business Transformation Group, TATA Consultancy Services, Cincinnati, OH, USA
Corresponding author: Srinivasan Jayaraman (e-mail: srinivasa.j@tcs.com).
This work was supported by TATA Consultancy Services (TCS). The funder had no role in the decision to submit the work for publication
and the views expressed herein are authors only.
ABSTRACT Wound care is a critical aspect of healthcare that involves treating and managing various types of wounds, typically caused by injuries, surgery, or chronic diseases such as diabetes. Chronic wounds can be particularly challenging to manage and often require 3 to 6 months of long-term care. In some instances, healing durations are highly unpredictable and vary with the severity of the wound, the patient's overall health, and other factors such as medication, nutrition, age, comorbidity, environment, etiology, and immune system function. A chronic wound can significantly impact quality of life, causing pain, discomfort, limited mobility, higher healthcare costs, and even mortality in severe cases. Effective wound care is crucial for promoting complete and timely healing and for reducing the risk of complications that may lead to amputation, infection, and other potentially life-threatening outcomes. This work develops a system that automatically determines wound boundaries by leveraging DeepLabV3+SE and measures wound characteristics such as size, area, and shape using a pipeline of morphological operations and connected component analysis. The proposed system's performance was evaluated on a publicly available dataset. Results demonstrate that DeepLabV3+SE outperforms several state-of-the-art methods, achieving Dice and IoU scores of 0.923 and 0.924, respectively.
INDEX TERMS Connected Component Analysis, DeepLabV3+SE, Morphological Operator, Squeeze and
Excite, Wound Assessment, Wound Care.
I. INTRODUCTION
THE quality of life of millions of people worldwide has been significantly reduced by acute and chronic non-healing wounds [1]. Periodic examination and effective treatment of wounds are crucial for complete and early wound recovery [2]. Ignoring this can result in severe complications such as limb amputation and death [3], [4]. Also, the phases of healing in chronic wounds do not progress in an orderly and prompt manner, resulting in hospitalization and additional treatment, thereby increasing the annual cost of healthcare services. This cost
in the United States alone was estimated to be around $96.8B
[3]. Given this, developing a wound management system has
become essential for chronic wound treatment.
A fundamental metric for a wound management system is
wound quantification, whose accuracy influences the diag-
nosis and effective treatment by the healthcare professionals
[5], [6]. Several research findings have highlighted the impor-
tance of wound measurement that aids in evaluating the heal-
ing trajectory, evaluating the effectiveness of the treatments,
and finding the future treatment plan for the patients, to name
a few [7]–[9]. Specifically, wound area measurement provides an effective and reliable predictor of the complete wound healing process [9], such as rate of closure, time of closure,
and other insights. Usually, healthcare professionals employ visual assessment or 1D techniques such as a ruler, flexible ruler, or adhesive methods, which can hurt patients due to infection risks and discomfort [5], [10]. This manual process is imprecise, time-consuming, and prone to intra-observer variability. These issues can be overcome through standardization. The first aspect of standardization is wound digitization, followed by wound segmentation and quantification. The second aspect is automation, by exploiting image processing and computer vision techniques paired with artificial intelligence (A.I.) to continuously monitor wound healing at an affordable cost. The COVID-19 pandemic has
further pushed the healthcare sector towards digitization. However, due to the complexities involved in the wound digitization process, such as variable lighting conditions and time constraints in clinical laboratories [11], wound segmentation remains a demanding issue.
Currently, a few commercial products like Planimator
app [12], Visitrak device [13], and Silhouette Mobile de-
vices [14] are available that capture the wound image
using a dedicated camera system with an in-built algo-
rithm for wound assessment. However, these specialized
system faces the following challenges: a) Affordabilitynot
all clinics have access due to its cost, b) In-patient
visitstill demands the patient’s visit to the clinic; and c)
Accuracymeasurements are sporadic in nature. Theoreti-
cally, various research groups have attempted to solve the
segmentation issue through special sessions like the Foot
Ulcer challenge or selfinitiated wound segmentation work.
Based on the technical adoption aspect, wound segmentation
can be categorized into traditional [15]–[18], [24] and deep
learning(DL) approaches [19]–[23], [34]–[39]. The first cate-
gory focuses on combining image processing techniques with
or without machine learning approaches via hand-crafted
features on wound images. Song et al. [24] used a Multi-Layer Perceptron (MLP) and a Radial Basis Function (RBF) neural network to identify the wound region.
Hani et al. [15] generated hemoglobin-based images by ap-
plying an Independent Component Analysis (ICA) algorithm
on the pre-processed RGB images. From the generated images, the granulation tissue is segmented using K-means clustering. The granulation tissue growth on the
wound bed is used to assess the early stages of wound
healing. Hettiarachchi et al. [16] applied energy minimizing
discrete dynamic contour algorithm on the saturation plane
of hue-saturation-value (HSV) color space of the image.
Then the enclosed contour is flood filled to estimate the
wound area. Fauzi et al. [17] used a modified HSV color
space input image and generated a Red-Yellow-Black-White
probability map, which aided the segmentation process. Liu et al. [18] attempted a 3D transformation approach for wound measurement using an integrated pipeline of structure from motion (SFM) and least squares conformal mapping (LSCM). In general, image processing-based wound
segmentation has bottlenecks such as: i) feature parameters depend on the user's expertise, making them prone to inter- and intra-observer variability; ii) hand-crafted features are affected by illumination, image resolution, skin pigmentation, camera angle, and so on; and iii) they are not robust to severe pathologies and rare cases, which is impractical from a clinical perspective [19], resulting in inferior performance of the hand-crafted feature approach.
Unlike traditional machine learning and image processing-
based methods, which make decisions based on hand-crafted
features, the DL-based methods combine feature extrac-
tion and decision-making. The superior performance of
AlexNet [25] on ImageNet classification has created much
traction among the research community, including but not
limited to semantic segmentation [26], [27], and medical im-
age analysis [28]. Wang et al. [29] used the vanilla fully con-
volutional neural network (FCN) architecture [26] to estimate
the wound area by segmenting the wounds. Then, the esti-
mated wound areas and the corresponding images are consid-
ered time-series data to predict the wound healing progress
using a Gaussian regression function model. Goyal et al.
[30] employed the FCN-16 architecture to classify each pixel as wound or non-wound, resulting in wound segmentation. Liu et al. [31] replaced the vanilla FCN decoder with skip-layer concatenation, up-sampled using bilinear interpolation, and appended a pixel-wise softmax layer at the end to obtain the segmented image.
Lu et al. [32] proposed a color correction and convolutional
neural network (CNN) model with a two-step pre-processing
pipeline to segment the overall wound without tissue seg-
mentation. Zahia et al. [23] proposed a CNN-based model
for tissue segmentation of pressure injury wounds with the
help of manual pre-processing steps. Godeiro et al. [33]
proposed a watershed algorithm for wound segmentation and
explored different CNN architectures like U-Net, Seg-Net,
FCN8, and FCN32, for the wound tissue classification. Goyal
et al. [34] proposed a diabetic foot ulcer network (DFUNet)
to classify healthy skin versus diabetic foot ulcers. Shenoy
et al. [35] proposed a deep wound algorithm using CNN
to classify nine different wound images. Pholberdee [22]
proposed a DL and data augmentation model to segment
each wound tissue separately. Cui et al. [36] proposed a
CNN for diabetic wound segmentation and probability maps
to remove artifacts. Alzubaidi et al. [37] proposed a CNN-
based DFU_QUTNet to classify diabetic foot ulcers versus
healthy skin from RGB color images. Chino et al. [38]
proposed an automatic skin ulcer assessment framework for
accurate wound segmentation and measurement. Sarp et al.
[39] proposed a hybrid wound segmentation and tissue clas-
sification algorithm by exploiting the conditional generative
adversarial network (GAN) to learn directly from data with-
out human knowledge. Scebba et al. [20] proposed a detect-
and-segment algorithm to produce wound segmentation maps
with high generalization capabilities. Anisuzzaman et al.
[21] presented an automated wound localizer based on the
YOLOv3 model followed by segmentation and classification.
Even though the DL approaches have improved the performance of wound segmentation to some extent, the complexities involved in the wound digitization process demand a greater number of DL model parameters to accurately analyze the wound images. Furthermore, there exists a gap in estimating wound parameters such as shape, length, width, perimeter, and circle diameter for analyzing the wound, which play a crucial role in therapy. In addition, to the authors' knowledge, there is no complete wound management system integrated with a human-in-the-loop (HIL) module, which is one of the motivations of this work.
Considering the above facts, developing a robust system that
tackles these challenges is essential.
Our proposed system has the following technical contribu-
tions:
1) The complete wound management system framework
consists of patient and physician portals that amalgamate
personal information with clinical parameters.
a) Each module for the proposed system has been imple-
mented as an API for easy integration with the existing
electronic medical records (EMR), electronic health
records (EHR), or a standalone system.
b) From the security aspect, in the proposed system, the
wound data access has been authenticated using JSON
Web Token (JWT) and further the dataset is encoded
with the received JWT and stored in the cloud environ-
ment.
2) The state-of-the-art DeepLabV3+ algorithm combined with the Squeeze and Excite (SE) network is leveraged to effectively determine multiple wound boundaries in a single image containing multiple wounds.
3) A wound assessment module has been applied in the pipeline to provide high-fidelity wound metrics or attributes along with wound shape estimation, which eases the physician's workload in understanding the wound progression.
4) Self-learning of the DL module has been powered by the
human-in-the-loop approach using the interactive Grab-
Cut algorithm.
The rest of the paper is organized as follows. Section II explains the wound management system, and Section III presents the system's outcome and briefly discusses the findings. Section IV explains the system's limitations, and finally, we conclude this work with our future focus in Section V.
II. METHODOLOGY
A. WOUND MANAGEMENT SYSTEM’S OVERVIEW
The proposed wound management system (WMS) is designed to assist the Physician / Health Care worker in diagnosing and treating wounds. The WMS consists of three stakeholders: a) the Patient, b) the Healthcare provider, and c) the Physician, as shown in Fig. 1. In the WMS pipeline, a mobile phone is utilized for gathering wound images, referred to as the patient portal. After de-identification, the captured wound images are securely uploaded to a dedicated healthcare provider cloud platform for further analysis. The Physician portal uses two-way communication, i.e., it receives wound assessment results and provides feedback on the outcome (human-in-the-loop).
Fig. 2 shows the Wound Management system and its
pipeline components. Here, the patient portal is a user-
friendly mobile application for Android / iOS devices and
captures wound images using the mobile camera. Metadata such as Patient-ID, Patient-Name, time/date stamp, wound event, and region tagging by the patient are attached to the wound images to create a patient record or dataset. The individual
patient record is also stored locally in JSON format, thus allowing the patient to add images to the existing wound event and region tagging. Images changed since the last upload are filtered and periodically prepared for subsequent upload to the cloud server.
FIGURE 1. Functional Block Diagram Representation of Wound Management System with Human-in-the-Loop Approach.
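As an illustration of such a record, the following minimal Python sketch builds and stores one locally in JSON format; the field names and values are assumptions for illustration, not the app's actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical patient record mirroring the metadata fields described above.
# All field names and values are illustrative; the real schema is not published.
record = {
    "patient_id": "P-0001",
    "patient_name": "Jane Doe",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "wound_event": "post-surgical",            # wound event tagged by the patient
    "region": "left ankle",                    # body-region tag
    "images": ["wound_20230101_1200.jpg"],     # images captured for this event
    "uploaded": False,                         # used to filter images changed since last upload
}

# Store the record locally in JSON format, as the patient portal does.
with open("patient_record.json", "w") as f:
    json.dump(record, f, indent=2)
```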
Connection to the healthcare provider cloud platform follows the OAuth standard and uses a JSON Web Token (JWT) for authorized data transfer. The patient record, comprising the wound image and metadata, is encoded with the received JWT and uploaded to the cloud portal via a Rest API using TLS (transport layer security) encryption. For feedback, the application displays a chart of wound area versus time and a few parameters after the Physician's analysis, which is explained in detail in the results section (Section III).
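A hedged sketch of this upload flow is shown below in Python using the requests library; the endpoint paths, token-exchange payload, and response fields are assumptions, since the actual cloud API is not documented here.

```python
import requests

BASE_URL = "https://provider.example.com/api"   # hypothetical healthcare provider endpoint

# 1) Obtain a JWT via an OAuth-style token exchange (credentials and paths are placeholders).
token_resp = requests.post(
    f"{BASE_URL}/auth/token",
    data={"client_id": "patient-app", "client_secret": "..."},
    timeout=10,
)
token_resp.raise_for_status()
jwt_token = token_resp.json()["access_token"]   # assumed response field

# 2) Upload the wound image and metadata over TLS, authorizing the request with the JWT.
with open("wound_20230101_1200.jpg", "rb") as img, open("patient_record.json", "rb") as meta:
    upload_resp = requests.post(
        f"{BASE_URL}/records",
        headers={"Authorization": f"Bearer {jwt_token}"},
        files={"image": img, "metadata": meta},
        timeout=30,
    )
upload_resp.raise_for_status()
```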
The Physician portal is the centerpiece, where the wound
images are analyzed using DL and wound assessment (WA)
modules implemented in a cloud platform. The Physician
portal shows the patient details, wound image, and the an-
alyzed wound image with wound metrics and shape overlaid
on the wound image. The Physician portal exhibits the wound
measurements such as area, perimeter, and healing index
obtained from the WA module in a chart form for diagnosis.
Also, it displays the medications prescribed by the physician
and the wound history by plotting the wound metric over
time. The Physician portal has two components:
Front-end: It is designed using React Native and can be browser-based or deployed on iPhone/Android phones. This application interacts with the cloud-hosted back-end servers, i.e., the DL-WA and Database servers, as described in the Back-end architecture (Fig. 2). These servers are accessed
using Rest API. Upon launching the portal front-end, an
authorized connection to the back-end servers gets estab-
lished, and the React application state gets populated with
a connection to the Database server. The user interface (UI)
design was evaluated for usability in a clinical situation
with physician input allowing easy comparison of wound
healing progression using actual wound images and charted
metrics. A touchscreen-based interactive environment marks
the wound areas and backgrounds for the Human-in-the-loop implementation.
FIGURE 2. Wound Management System Architecture
FIGURE 3. Wound Management System Containerization
Back-end: Sitting behind a Rest API, the Database and DL-WA servers are implemented on a cloud platform using Docker containers and an Nginx reverse proxy, as shown in Fig. 3. After authentication following OAuth standards,
secure access to the Rest API is ensured using JWT. The
portal interacts with the Database and DL-WA servers via
the Rest API. A service on the healthcare cloud platform is responsible for decoding the dataset and storing the received data in an instance of the MySQL service. Secure and authorized communication between services is established over a user-defined bridge network.
Operational Guide: The physician can open a patient
record using the Patient ID. Newly uploaded images are
available on the Physician portal front end for analysis and
review. The physician can also review wound history and
patient details relevant to wound diagnosis. Remarks made by
the physician and the calculated wound healing metrics are
stored in the Database. Additionally, the analyzed images get
uploaded to the cloud Database for future review. In addition,
the physician can edit the wound annotation boundary, which is fed back to the DL server to complete the human-in-the-loop mechanism.
B. DEEP LEARNING (DL) MODULE
In this work, the DeepLabV3+SE, with Resnet-50 as the
encoder, is explored for wound segmentation, and its block
diagram is shown in Fig. 4(a). The text in each gray block
in Fig. 4(a) represents the number of filters, the kernel
size, and a dilation rate of a convolutional layer followed
by batch normalization and a ReLU activation function.
The final activation function used before the segmentation
output is a Sigmoid function. As shown in Fig. 4(a), only two outputs, from the "conv2_block3_2_relu" (denoted as A) and "conv4_block6_2_relu" (denoted as B) layers of the Resnet-50 network, are used; hence, only part of the Resnet-50 architecture is used, as shown in Fig. 4(b). Note that X2, X3, and X5 in front of the dotted blocks inside conv2_block, conv3_block, and conv4_block represent repetitions of the respective dotted blocks 2, 3, and 5 times, respectively.
In the Resnet-50 encoder, the weights are initialized with
the pre-trained weights obtained from the ImageNet dataset.
Compared to the random weight initialization, the pre-trained
weight initialization converges the model faster with better accuracy, thereby improving the efficiency and generalization ability of the DeepLabV3+SE model.
FIGURE 4. Block diagram representation of the DeepLabV3+SE architecture with Resnet-50 as encoder: (a) DeepLabV3+SE architecture with Resnet-50 as encoder; (b) Resnet-50 architecture used as encoder in DeepLabV3+SE.
The output of the
"conv4_block6_2_relu" layer from the Resnet-50 is given to
Atrous Spatial Pyramid Pooling (ASPP) [40] consisting of
dilated convolutions with different dilation rates that help to
encode multi-scale contextual information. ASPP resamples
the feature layer with mapping at multiple rates before the
convolution layer. This approach efficiently captures the ob-
jects and image context with multiple filters and scales.
Further, the output of ASPP is bilinearly up-sampled by
a factor of 4 and then concatenated with the output of a
convolutional layer of 48 filters with a kernel size of 1x1.
The concatenated output is passed through a Squeeze and
Excitation (SE) network [41], where the inter-dependencies
between channels of the convolutional features are ex-
plicitly modeled to improve the representational power of
the DeepLabV3+ network. In addition, it dynamically re-
calibrates the channel-wise features to emphasize informative
features and suppress the non-useful ones. The output of the
SE network is passed through the two convolutional layers
of 256 filters with 3x3 kernel size, followed by another
SE network. The output of the second SE network is up-
sampled with a bilinear interpolation by a factor of 4. The
up-sampled output is passed through a convolutional layer of
a single filter with a kernel size of 1x1, followed by a Sigmoid
activation function to get the segmentation output.
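For concreteness, a minimal Keras sketch of a squeeze-and-excitation block in the style of Hu et al. [41] is given below; the reduction ratio and layer arrangement are assumptions and do not reproduce the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def squeeze_excite_block(x, ratio=16):
    """Channel-wise recalibration in the style of Hu et al. [41]; `ratio` is an assumed value."""
    channels = x.shape[-1]
    # Squeeze: aggregate spatial information into one descriptor per channel.
    s = layers.GlobalAveragePooling2D()(x)
    # Excite: two fully connected layers produce per-channel weights in [0, 1].
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    # Re-weight the feature maps channel by channel.
    return layers.Multiply()([x, s])
```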
C. WOUND ASSESSMENT (WA) MODULE
Wound measurement is one of the crucial components in
the wound management system, the accuracy of which influ-
ences the diagnosis and treatment by healthcare profession-
als. Also, it is critical to evaluate the wound healing trajectory
and determine the future treatment. In addition, the wound
area gives an effective and reliable index of later complete
wound healing. These functions are accomplished by the WA module, which estimates the wound parameters, predicts the wound shape, and overlays the wound metrics on the input image along with the wound boundary. The WA module receives the segmented mask generated by the DL module and thresholds it to generate a binary mask, which is used for further processing. The WA module includes morphological operations, connected component analysis, wound parameter estimation, and shape analysis, as shown in Fig. 5.
Morphological operations are performed on the binary mask to remove tiny regions/spurious noise and to fill small holes within the wound to improve the true-positive
rate. In a few cases, the deep learning network could identify
the blood stain as a wound, causing a small false-positive
region/noise in the segmented mask. This small false-positive
region is detected and removed by performing morphological
operations on the segmented mask. On the other hand, the
abnormal tissue, like fibrinous tissue inside the wound, could
be treated as a non-wound region by the network representing
it as small holes inside the segmented mask. These holes are
detected and filled by morphological operations.
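A minimal sketch of this clean-up step is shown below using scikit-image's area-based morphology; the binarization threshold and area limits are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from skimage import morphology

def clean_mask(prob_mask, threshold=0.5, min_noise_area=64, max_hole_area=64):
    """Binarize the DL output, drop spurious blobs, and fill small holes.

    All numeric parameters are illustrative assumptions.
    """
    binary = prob_mask > threshold
    # Remove small false-positive regions (e.g., blood stains detected as wound).
    binary = morphology.remove_small_objects(binary, min_size=min_noise_area)
    # Fill small holes inside the wound (e.g., fibrinous tissue labeled as non-wound).
    binary = morphology.remove_small_holes(binary, area_threshold=max_hole_area)
    return binary.astype(np.uint8)
```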
FIGURE 5. Functional block diagram of wound assessment (WA) module
that enhances the segmentation outcome of the DeepLabV3+SE, and
estimates the wound metrics as well as finds the approximate shape of
the wound.
Connected component analysis is applied to label the connected regions, followed by measurement of those labeled regions. The wound metrics, such as area,
perimeter, circle diameter, and major and minor axis length
of the ellipse, are used in conjunction with the shape analysis
algorithm to find the approximate shape of the wound. For
example, the eccentricity parameter of ellipse [42], circu-
larity [43], and rectangularity (extent) are adopted in the
shape analysis algorithm to determine the wound shape. The
eccentricity of the ellipse, e, is calculated using Eq. (1); a circle is the particular case when e = 0 [42].

e = \frac{\text{distance between foci}}{\text{length of major axis}}, \quad 0 \le e < 1 \qquad (1)
As the name suggests, the circularity c measures the roundness of the shape and is defined by

c = \frac{4 \pi A}{r^2} \qquad (2)

where A is the area and r is the perimeter. For a perfect circle, c = 1 [43]. Rectangularity (extent) \psi_R [44] is calculated using Eq. (3):

\psi_R = \frac{\text{no. of pixels in ROI}}{\text{no. of pixels in the bounding box}} \qquad (3)
Based on the values of these parameters, e, c, and \psi_R, the approximate shape of the wound is determined. Finally, the
measured parameters and the wound boundary are overlaid
on the wound image.
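A sketch of this measurement and shape-analysis step using scikit-image region properties is given below; the shape-decision thresholds are illustrative assumptions, as the paper does not state its exact decision rules.

```python
import math
from skimage import measure

def assess_wounds(binary_mask):
    """Label connected wound regions, estimate metrics, and guess an approximate shape."""
    results = []
    for region in measure.regionprops(measure.label(binary_mask)):
        area, perimeter = region.area, region.perimeter
        circularity = 4 * math.pi * area / (perimeter ** 2) if perimeter > 0 else 0.0
        metrics = {
            "area": area,
            "perimeter": perimeter,
            "equivalent_diameter": region.equivalent_diameter,
            "major_axis": region.major_axis_length,
            "minor_axis": region.minor_axis_length,
            "eccentricity": region.eccentricity,   # e in Eq. (1)
            "circularity": circularity,            # c in Eq. (2)
            "extent": region.extent,               # rectangularity in Eq. (3)
        }
        # Simple rule-based shape decision; the thresholds are assumptions.
        if circularity > 0.9 and region.eccentricity < 0.3:
            metrics["shape"] = "circle"
        elif region.extent > 0.85:
            metrics["shape"] = "rectangle"
        else:
            metrics["shape"] = "ellipse"
        results.append(metrics)
    return results
```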
D. HUMAN-IN-THE-LOOP
Human-in-the-loop has been introduced to improve the sys-
tem’s accuracy and enable the DL module’s self-learning
feature. In addition, our system will be validated by the
physician or user every time they use the tool, which is a dou-
ble verification process. For example, during the physician’s
investigation, if the Physician determines that the wound
segmentation outcome is inaccurate, it can be rectified via the interactive GrabCut module [45], as shown in Fig. 6. Otherwise, the Physician approves the segmentation output, and the WA module estimates the wound metrics, which are overlaid on the wound image along with the wound boundary for display on the physician portal. As part of personal information integration, the wound segmentation output with wound metrics and its original wound image are stored in the
Database server. On the other hand, when the physician opts
for correcting the wound boundary via GrabCut, he annotates
the foreground and background, and the system extracts
the boundaries. The process of annotating the foreground
and background is repeated till the Physician is satisfied
with the boundary segmented by GrabCut. The corrected segmentation output and its original wound image are accumulated on the server. Once the number of accumulated physician-corrected segmented images exceeds the set threshold value, the DL module's retraining is initiated automatically, thereby leveraging the self-learning technique. The accumulated segmentation outputs obtained by GrabCut with the physician's input serve as the ground truth for retraining, which improves the DL module's accuracy.
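A hedged OpenCV sketch of this correction step is shown below; it initializes GrabCut from the physician's foreground/background scribbles and is a simplified stand-in for the interactive module, not its actual implementation.

```python
import cv2
import numpy as np

def refine_with_grabcut(image_bgr, fg_scribble, bg_scribble, iterations=5):
    """Refine a wound boundary from physician scribbles (boolean arrays of marked pixels)."""
    # Start with every pixel as "probably background", then burn in the scribbles.
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[bg_scribble] = cv2.GC_BGD      # definite background strokes
    mask[fg_scribble] = cv2.GC_FGD      # definite foreground (wound) strokes

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)

    # Foreground and probable-foreground pixels form the corrected wound mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
```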
III. RESULTS AND DISCUSSION
The front-end GUI and workflow of WMS are shown in Fig.
7. The physician can view the patient details and wound
history relevant to the wound diagnosis by opening a patient
record using a Patient ID. The charts and wound progress sections of the physician portal, shown in Fig. 7, represent the wound measurement history and the set of wound images, respectively, for making observations about the wound healing progress. The DL and WA modules were implemented in
Python with Keras and TensorFlow as back-end and trained
and validated on HPC-A3 TCS (TATA Consultancy Services)
server powered by NVIDIA DGX-A100 series. The DL
and WA modules reside in the API server, and the trained
DeepLabV3+SE model, the patient’s information data, and
the accumulated segmentation outputs from GrabCut and
their respective wound images are stored in the Database
server.
A. DATASET
In this work, we have used the publicly available AZH
Wound care dataset [46] and the Foot Ulcer Challenge (FUC)
Segmentation dataset [46]. Both datasets are merged, resulting in a total of 1841 images, and for homogeneity, the images are resized to 256×256. Furthermore, image augmentation techniques such as brightness and saturation adjustment, hue shift and rotation, CutMix and MixUp, and horizontal and vertical flips were applied to increase the training dataset from 1675 to 55,275 wound images, which is more extensive than the DFUC2021 challenge dataset [47]. The validation and testing sets had 166 and 278 images, respectively, and were used without augmentation for validation and testing of the model.
FIGURE 6. Flow diagram of Human-in-the-loop.
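A per-image sketch of these augmentations using tf.image is given below, under the assumption that each sample is an (image, mask) pair processed in eager mode; the ranges are illustrative, quarter-turn rotations stand in for the rotation augmentation, and CutMix/MixUp are omitted because they mix pairs of samples at the batch level.

```python
import tensorflow as tf

def augment(image, mask):
    """Photometric and geometric augmentations; all ranges are assumed values."""
    # Photometric changes are applied to the image only.
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
    image = tf.image.random_hue(image, max_delta=0.05)

    # Geometric changes must be applied identically to image and mask.
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)     # number of 90-degree turns
    image, mask = tf.image.rot90(image, k), tf.image.rot90(mask, k)
    flips = tf.random.uniform([2])
    if flips[0] > 0.5:
        image, mask = tf.image.flip_left_right(image), tf.image.flip_left_right(mask)
    if flips[1] > 0.5:
        image, mask = tf.image.flip_up_down(image), tf.image.flip_up_down(mask)
    return image, mask
```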
B. MODEL PERFORMANCE EVALUATION METRICS
In this study, the performance was evaluated using the evalu-
ation metrics like Dice Score, Intersection over Union (IoU),
Precision, and Recall, which are given below for complete-
ness.
\text{Dice Score} = \frac{2 \times TP}{2 \times TP + FP + FN} \qquad (4)

\text{IoU} = \frac{TP}{TP + FP + FN} \qquad (5)

\text{Precision} = \frac{TP}{TP + FP} \qquad (6)

\text{Recall} = \frac{TP}{TP + FN} \qquad (7)
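For completeness, the following NumPy sketch computes these metrics directly from a predicted and a ground-truth binary mask.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Dice, IoU, Precision, and Recall from binary masks, following Eqs. (4)-(7)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }
```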
C. DL RESULTS
In DL training, the learning rate was initialized to 0.0001, and a mini-batch size of 16 images was adopted, considering the trade-off between training accuracy and efficiency. Training loss quantifies how well the model fits the training data, while validation loss assesses how well the model fits new data.
FIGURE 7. Front end GUI and workflow of the Wound Management System.
In order to
have balanced training by avoiding under-fitting and over-fitting, both losses should be monitored during training. Hence, the training is terminated early when both losses have converged and there is no significant improvement in the validation loss for more than 10 epochs, as shown in Fig. 8. The Adam optimization algorithm was adopted to update
the network parameters. The number of trainable parameters
of DeepLabV3+ and DeepLabV3+SE with those of other
state-of-the-art methods and their performance comparisons
in terms of evaluation metrics, Dice, IOU, Precision, and
Recall, are given in Table 1. To evaluate the performance of
the DL module, the testing set of the AZH wound care dataset [46] is used for both DeepLabV3+ and DeepLabV3+SE.
The number of trainable parameters and evaluation metrics
of VGG-16, SegNet, U-Net, Mask-RCNN, MobileNetV2,
and MobileNetV2+CCL, and Ensemble of CNNs (ECNN),
which are evaluated on the same testing dataset, are reported
from [19] and [48], respectively.
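A minimal Keras sketch of this training configuration (Adam, initial learning rate 0.0001, mini-batch size 16, early stopping on the validation loss with a patience of 10 epochs) is given below; the loss function, metric, and epoch cap are assumptions, since they are not stated in the text.

```python
import tensorflow as tf

def train(model, train_dataset, val_dataset):
    """Training setup described above; `model` is the DeepLabV3+SE network (construction not shown)."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),   # initial learning rate from the paper
        loss="binary_crossentropy",                                # assumed loss; not stated in the text
        metrics=["accuracy"],
    )
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True)
    return model.fit(
        train_dataset.batch(16),                 # mini-batch size of 16
        validation_data=val_dataset.batch(16),
        epochs=200,                              # upper bound; early stopping ends training sooner
        callbacks=[early_stop],
    )
```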
TABLE 1. Evaluation Metrics of DeepLabV3+SE with Resnet-50 Encoder
Model # Params Dice IOU Pre Recall
VGG16 [19] 134,264,641 0.810 - 0.839 0.783
SegNet [19] 902,561 0.851 - 0.836 0.865
U-Net [19] 4,834,839 0.901 - 0.890 0.913
Mask-RCNN [19] 63,621,918 0.902 - 0.943 0.864
MobileNetV2 [19] 2,141,505 0.903 - 0.908 0.897
MobileNetV2+CCL [19] 2,141,505 0.905 - 0.910 0.899
ECNN [48] - 0.921 0.855 0.927 0.918
DeepLabV3+ 11,868,481 0.919 0.920 0.964 0.876
DeepLabV3+SE
(Proposed) 11,891,585 0.923 0.924 0.960 0.883
The results indicate that the proposed method
(DeepLabV3+SE) outperforms DeepLabV3+, ECNN, Mo-
bileNetV2+CCL, MobileNetV2, Mask-RCNN, U-Net, Seg-
Net and VGG16 in the performance measure, the Dice score,
by 0.4%, 0.2%, 1.9%, 2.2%, 2.3%, 2.4%, 8.4% and 13.9%,
respectively. Similarly, in the second performance metric-
IoU, the DeepLabV3+SE outperforms DeepLabV3+ and
ECNN by 0.4% and 8.0%, respectively. In terms of Precision,
DeepLabV3+SE outperforms ECNN, MobileNetV2+CCL,
MobileNetV2, Mask-RCNN, U-Net, SegNet and VGG16
by 3.5%, 5.5%, 5.7%, 1.8%, 7.8%, 14.8% and 14.4%,
respectively, and under-performs DeepLabV3+ by 0.4%.
However, DeepLabV3+SE under-performs ECNN, Mo-
bileNetV2+CCL, MobileNetV2 and U-Net by 3.8%, 1.7%,
1.5% and 3.3%, respectively, and outperforms DeepLabV3+,
Mask-RCNN, SegNet and VGG16 by 0.8%, 2.2%, 2.1% and
12.7%, respectively, in terms of Recall. Generally, the higher
the number of trainable parameters, the better the model’s
performance. However, this does not hold for Mask-RCNN and VGG16, which require 5.5 and 11.3 times more parameters than DeepLabV3+SE, respectively, and still underperform. On the other hand, SegNet, MobileNetV2 and MobileNetV2+CCL, and U-Net need 13, 5.5, and 2.4 times fewer parameters than DeepLabV3+SE, respectively, yet, except for SegNet, they perform better than DeepLabV3+SE only in terms of Recall. Considering the trade-off between
the performance and number of trainable parameters, the
proposed DeepLabV3+SE performs better in terms of Dice
and IOU scores than others, regardless of the chronic wound
segmentation dataset. Thus, it is evident that our model is
robust and unbiased for wound segmentation with a single
image that has been captured via a single camera system.
Besides this, our system pipeline consists of a WA module to
estimate the wound parameters and to find the approximate
wound shape from the sequence of DeepLabV3+SE’s output
images.
FIGURE 8. Training and Validation Loss metrics of DeepLabV3+SE.
D. WOUND ASSESSMENT MODULE RESULTS
The wound images, segmented masks obtained from
DeepLabV3+SE, and the respective post-processed outputs
are shown in Fig. 9. In Fig. 9, (a1)-(c1) shows the original
wound images, (a2)-(c2) shows the segmented masks ob-
tained from DeepLabV3+SE, and (a3)-(c3) shows the out-
puts after post-processing. The holes in the segmented masks
obtained from DeepLabV3+SE are represented with a red
box in Fig. 9 (a2) and (b2), and the spurious noise is shown
with a yellow box in Fig. 9(c2). These holes and the noises
are removed after post-processing, as shown in Fig. 9(a3)-
(c3). In Fig. 9 (c1,c2) and (c3), the green boxes represent
the zoomed version of the wound image, segmented masks
obtained from DeepLabV3+SE, and the post-processed out-
put, respectively, to show the removal of spurious noise.
The post-processing carried out after image segmentation in
[19] uses only connected component labeling to remove the
holes and noise in the wound region and outside the ROI, respectively. In contrast, our method uses morphological operations to remove noise and holes, and connected
component analysis for labeling the wound regions. These
labeled connected regions are used for estimating wound
dimensions such as width, length, circle diameter, major and
minor axis length of the ellipse, area, and perimeter. These
measurements are used in conjunction with shape analysis
to find the approximate shape of the wound, as shown in
Fig. 10 for the test images. In Fig. 10, the wound’s nearest
shape and its respective parameter measurements are overlaid
on the wound image. For example, for the wound images (i & iii) in the first row, the wound shape is an ellipse, and its parameters, the major axis length (a) and minor axis length (b) of the ellipse, are overlaid along with the area and perimeter of the wound. If the approximate wound shape is a rectangle, the width (W), height (H), area, and perimeter are overlaid on the wound image. Similarly, in the case of a circular wound, the diameter, area, and perimeter are indicated on the wound image.
FIGURE 9. Wound Assessment module’s segmentation outcome with
color code for some test images. Here (a1)-(c1) represents the original
image, (a2)-(c2) are the segmented outputs from DeepLabV3+SE, and
(a3)-(c3) are the outputs of post-processing. Note that red boxes in (a2)
and (b2) show holes, and the yellow box in (c2) depicts spurious noise.
The green box in (c1), (c2), and (c3) shows the zoomed wound area and
its masks from DeepLabV3+SE and after post-processing, respectively.
FIGURE 10. Wound Measurements such as area, perimeter, and shape
(rectangle, circle, and ellipse) are given with color code visual layout
representing the wound boundary for some test images. Green and
Yellow color visual layouts represent the first and second wounds in the
original wound image, respectively.
E. HUMAN-IN-THE-LOOP RESULTS
The images obtained during the GrabCut process are shown
in Fig. 11. This process is interactive and iterative, as shown
in Fig. 6, which directly depends on the user’s feedback. For
example, Fig. 11 (a) and (b) represent the GrabCut annotation
process (Foreground and Background marking in Blue and
Green colors, respectively) and its corresponding output.
If the user or physician is unsatisfied or has neglected a region to annotate, the next iteration can be performed, as represented in Fig. 11 (c, d) and (e, f), which improves the accuracy of the boundary region.
FIGURE 11. Images obtained during the interactive GrabCut. (a), (c) and
(e) represent the Images with Foreground and Background markings for
1st, 2nd, and 3rd iterations, respectively. Note that foreground and
background are marked with blue and green colors, respectively. (b), (d)
and (f) show the original wound image with the boundary marked in
Yellow color as an outcome of the GrabCut process.
IV. LIMITATION
In this study, due to privacy concerns, the publicly avail-
able AZH Wound care [46] and the Foot Ulcer Challenge
Segmentation [47] datasets have been used for training the
DeepLabV3+SE model. The wound images in these datasets
are captured with different image-capturing devices without
camera calibration. As the camera was not calibrated during image capture, mapping from the digital domain in pixels to physical units is not possible. Due to this limitation, the wound measurements in Fig. 10 are presented in pixels (digital domain) rather than in physical units. This limitation could be addressed by calibrating
the camera before or while capturing the wound images.
Thus, the wound measurements in pixels can be mapped to
the real-world physical units using the scaling factor obtained
during the camera calibration.
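As a simple illustration, the sketch below maps the pixel-domain measurements to millimetres given such a scaling factor; the factor itself is assumed to be available from calibration.

```python
def to_physical_units(area_px, perimeter_px, mm_per_pixel):
    """Convert pixel-domain wound measurements to millimetres.

    `mm_per_pixel` is the scaling factor obtained from camera calibration
    (assumed to be known); it is not available for the datasets used here.
    """
    return {
        "area_mm2": area_px * mm_per_pixel ** 2,      # areas scale with the square of the factor
        "perimeter_mm": perimeter_px * mm_per_pixel,  # lengths scale linearly
    }
```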
V. CONCLUSION
In this paper, a wound management system using the DeepLabV3+SE network for wound segmentation achieved significantly high Dice and IoU scores in automated wound boundary detection. In addition, quantitative assessment of wound parameters and prediction of the wound shape demonstrate that the system generally works well in diverse environments. The HIL module provides input to the self-learning process, i.e., it updates the DeepLabV3+SE model over time with the help of the physician's feedback. A new model is deployed after retraining whenever the number of physician-corrected segmented images surpasses the set threshold value. Thus, our proposed system has the following
potential benefits/advantages:
1) Robustness: A single camera system can handle any type
of wound images captured in different environments with
challenges like uneven illumination, low image resolu-
tion, different image shape, skin pigmentation, and so on,
that occur individually or in combination.
2) Wound assessment module helps the Physician to under-
stand the progress of the wound and analyze the effect
of medication on the wound healing over a period of
time. Having all the information on wound metrics and
medications at his disposal, he can take the necessary
action for faster wound healing.
3) Self-learning of the DL module due to the exploitation
of the human-in-the-loop approach improves the accuracy
and performance of the system.
4) The proposed telemedicine system will enhance the per-
vasive wound monitoring and care management strategies
for the Physician and the Patient.
Adopting our proposed system in the continuum of care via remote monitoring will enhance the Physician's pervasive wound diagnosis and management strategies, and patients in rural areas or at far-off distances can have access to an early, improved, and cost-effective diagnosis. In future work, we will focus on removing the dependency on masks or annotations by using either semi-supervised or unsupervised networks. In addition, data security model integration with HIPAA compliance standards will be adopted to complete the system's full functionality.
ACKNOWLEDGMENT
The authors acknowledge the HPC–TCS B&TS team for pro-
viding us with the ultra-modern HPC infrastructure, which
accelerated and supported this work immensely.
REFERENCES
[1] Robert G. Frykberg and Jaminelli Banks. Challenges in the treatment of
chronic wounds. Adv. in Wound Care, 4(9):560–582, 2015.
[2] George Han and Roger Ceilley. Chronic wound healing: a review of
current management and treatments. Adv. in Therapy, 34(3):599–610,
2017.
[3] Chandan K Sen. Human wounds and its burden: an updated compendium
of estimates. Adv. in Wound Care, 8(2):39–48, 2019.
[4] Julia Escandon, Alejandra C Vivas, Jennifer Tang, Katherine J Rowland,
and Robert S Kirsner. High mortality in patients with chronic wounds.
Wound Repair and Regeneration, 19(4):526–528, 2011.
[5] Lawrence A Lavery, Sunni A Barnes, Michael S Keith, John W Seaman Jr,
and David G Armstrong. Prediction of healing for postoperative diabetic
foot wounds based on early wound area progression. Diabetes Care,
31(1):26–29, 2008.
[6] Stephan Coerper, Stefan Beckert, Markus A Küper, Martin Jekov, and
Alfred Königsrainer. Fifty percent area reduction after 4 weeks of
treatment is a reliable indicator for healing—analysis of a single-center
cohort of 704 diabetic patients. J. of Diabetes and its Complications,
23(1):49–53, 2009.
[7] Karen Ousey and Leanne Cook. Understanding the importance of holistic
wound assessment. Practice nursing, 22(6):308–314, 2011.
[8] Linda Russell. The importance of wound documentation and classification.
Brit. J. of Nursing, 8(20):1342–1354, 1999.
[9] Matthew Cardinal, David E Eisenbud, Tania Phillips, and Keith Harding.
Early healing rates and wound area measurements are reliable predictors of
later complete wound closure. Wound repair and regeneration, 16(1):19–
22, 2008.
[10] Kyle L Wu, Katie Maselli, Anna Howell, Daniel Gerber, Emmanuel
Wilson, Patrick Cheng, Peter Cw Kim, and Ozgur Guler. Feasibility of 3d
structure sensing for accurate assessment of chronic wound dimensions.
Int. J. Comput. Assist. Radiol. Surgery, 10(1):13–14, 2015.
[11] Christos P Loizou, Takis Kasparis, and Michalis Polyviou. Evaluation of
wound healing process based on texture image analysis. J. of Biomed.
Graph. and Comput., 3(3):1–13, 2013.
[12] Foltynski P. Ways to increase precision and accuracy of wound area measurement using smart devices: Advanced app planimator. PLoS ONE, 13(3):e0192485, https://doi.org/10.1371/journal.pone.0192485, 2018.
[13] Sylvie Treuillet, Benjamin Albouy, and Yves Lucas. Three-dimensional
assessment of skin wounds using a standard digital camera. IEEE Trans-
actions on Medical Imaging, 28(5):752–762, 2009.
[14] Urban Pavlovčič, Janez Diaci, Janez Možina, and Matija Jezeršek. Wound perimeter, area, and volume measurement based on laser 3d and color acquisition. BioMedical Engineering OnLine, 14(1):39, 2015.
[15] Ahmad Fadzil M. Hani, Leena Arshad, Aamir Saeed Malik, Adawiyah
Jamil, and Felix Boon Bin. Haemoglobin distribution in ulcers for
healing assessment. 2012 4th International Conference on Intelligent and
Advanced Systems (ICIAS2012), 1:362–367, 2012.
[16] N. D. J. Hettiarachchi, R. B. H. Mahindaratne, G. D. C. Mendis, H. T.
Nanayakkara, and Nuwan Dayananda Nanayakkara. Mobile based wound
measurement. 2013 IEEE Point-of-Care Healthcare Technologies (PHT),
pages 298–301, 2013.
[17] Mohammad Faizal Ahmad Fauzi, Ibrahim Khansa, Karen Catignani,
Gayle Gordillo, C. Sen, and Metin Nafi Gürcan. Computerized segmen-
tation and measurement of chronic wound images. Computers in biology
and medicine, 60:74–85, 2015.
[18] Chunhui Liu, Xingyu Fan, Zhizhi Guo, Zhongjun Mo, Eric I-Chao Chang,
and Yan Xu. Wound area measurement with 3d transformation and
smartphone images. BMC Bioinformatics, 20, 2019.
[19] Chuanbo Wang, D M Anisuzzaman, Victor Williamson, Mrinal Kanti
Dhar, Behrouz Rostami, Jeffrey Niezgoda, Sandeep Gopalakrishnan, and
Zeyun Yu. Fully automatic wound segmentation with deep convolutional
neural networks. Sci Rep, 10(1):21897, Dec 2020.
[20] Gaetano Scebba, Jia Zhang, Sabrina Catanzaro, Carina Mihai, Oliver
Distler, Martin Berli, and Walter Karlen. Detect-and-segment: a deep
learning approach to automate wound image segmentation. Informatics
in Med. Unlocked, 29:100884, 2022.
[21] DM Anisuzzaman, Yash Patel, Jeffrey A Niezgoda, Sandeep Gopalakr-
ishnan, and Zeyun Yu. A mobile app for wound localization using deep
learning. IEEE Access, 10:61398–61409, 2022.
[22] Nanthipath Pholberdee. Wound-region segmentation from image by using
deep learning and various data augmentation methods. PhD thesis,
Silpakorn University, 2019.
[23] Sofia Zahia, Daniel Sierra-Sosa, Begonya Garcia-Zapirain, and Adel El-
maghraby. Tissue classification and segmentation of pressure injuries
using convolutional neural networks. Comput. Methods and Programs in
Biomed., 159:51–58, 2018.
[24] Bo Song and Ahmet Sacan. Automated wound identification system based
on image segmentation and artificial neural networks. In 2012 IEEE
International Conference on Bioinformatics and Biomedicine, pages 1–4,
2012.
[25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet
classification with deep convolutional neural networks. In Adv. in Neural
Inf. Process. Syst., pages 1097–1105, 2012.
[26] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional
networks for semantic segmentation. In Proc. of the IEEE Conf. on
Comput. Vis. and Pattern Recognit. (CVPR), pages 3431–3440, 2015.
[27] Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor
Villena-Martinez, and Jose Garcia-Rodriguez. A review on deep
learning techniques applied to semantic segmentation. arXiv preprint
arXiv:1704.06857, 2017.
[28] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud
Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian,
Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A
survey on deep learning in medical image analysis. Med. Image Anal.,
42:60–88, 2017.
[29] Changhan Wang, Xinchen Yan, Max Smith, Kanika Kochhar, Marcie
Rubin, Stephen M Warren, James Wrobel, and Honglak Lee. A unified
framework for automatic wound segmentation and analysis with deep
convolutional neural networks. Annu Int Conf IEEE Eng Med Biol Soc,
2015:2415–2418, 2015.
[30] Manu Goyal, Moi Hoon Yap, Neil D Reeves, Satyan Rajbhandari, and
Jennifer Spragg. Fully convolutional networks for diabetic foot ulcer
segmentation. In 2017 IEEE Int. Conf. on Syst., man, and cybernetics
(SMC), pages 618–623, 2017.
[31] Xiaohui Liu, Changjian Wang, Fangzhao Li, Xiang Zhao, En Zhu, and
Yuxing Peng. A framework of wound segmentation based on deep
convolutional networks. In 2017 10th Int. Congress on Image and Signal
Process., Biomed. Eng. and Informatics (CISP-BMEI), pages 1–7, 2017.
[32] Huimin Lu, Bin Li, Junwu Zhu, Yujie Li, Yun Li, Xing Xu, Li He,
Xin Li, Jianru Li, and Seiichi Serikawa. Wound intensity correction
and segmentation with convolutional neural networks. Concurrency and
Computation: Practice and Experience, 29(6):e3927, 2017.
[33] Vitor Godeiro, José Silva Neto, Bruno Carvalho, Bruno Santana, Julianny
Ferraz, and Renata Gama. Chronic wound tissue classification using
convolutional networks and color space reduction. In 2018 IEEE 28th
Int. Workshop on Mach. Learning for Signal Process. (MLSP), pages 1–6,
2018.
[34] Manu Goyal, Neil D Reeves, Adrian K Davison, Satyan Rajbhandari,
Jennifer Spragg, and Moi Hoon Yap. Dfunet: Convolutional neural
networks for diabetic foot ulcer classification. IEEE Trans. on Emerging
Topics in Computational Intell., 4(5):728–739, 2018.
[35] Varun N Shenoy, Elizabeth Foster, Lauren Aalami, Bakar Majeed, and
Oliver Aalami. Deepwound: Automated postoperative wound assessment
and surgical site surveillance through convolutional neural networks. In
2018 IEEE Int. Conf. on Bioinformatics and Biomed. (BIBM), pages
1017–1021, 2018.
[36] Can Cui, Karl Thurnhofer-Hemsi, Reza Soroushmehr, Abinash Mishra,
Jonathan Gryak, Enrique Domínguez, Kayvan Najarian, and Ezequiel
López-Rubio. Diabetic wound segmentation using convolutional neural
networks. In 2019 41st Annual Int. Conf. of the IEEE Eng. in Med. and
Biol. Society (EMBC), pages 1002–1005, 2019.
[37] Laith Alzubaidi, Mohammed A Fadhel, Sameer R Oleiwi, Omran Al-
Shamma, and Jinglan Zhang. Dfu_qutnet: diabetic foot ulcer classification
using novel deep convolutional neural network. Multimedia Tools and
Appl., 79(21):15655–15677, 2020.
[38] Daniel YT Chino, Lucas C Scabora, Mirela T Cazzolato, Ana ES Jorge,
Caetano Traina-Jr, and Agma JM Traina. Segmenting skin ulcers and
measuring the wound area using deep convolutional networks. Comput.
Methods and Programs in Biomed., 191:105376, 2020.
[39] Salih Sarp, Murat Kuzlu, Manisa Pipattanasomporn, and Ozgur Guler.
Simultaneous wound border segmentation and tissue classification using a
conditional generative adversarial network. The J. of Eng., 2021(3):125–
134, 2021.
[40] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy,
and Alan L Yuille. Deeplab: Semantic image segmentation with deep
convolutional nets, atrous convolution, and fully connected crfs. IEEE
Trans. on Pattern Anal. and Mach. Intell., 40(4):834–848, 2017.
[41] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proc.
of the IEEE Conf. on Comput. Vis. and Pattern Recognit. (CVPR), pages
7132–7141, 2018.
[42] Craig A Kluever. Spaceflight Mechanics, Encyclopedia of Physical
Science and Technology. Academic Press, 2003.
[43] Lijun Sun. Structural behavior of asphalt pavements. Butterworth-
Heinemann, 2016.
[44] Scikit-image. Scikit-image library. https://scikit-image.org/docs/stable/api/skimage.measure.html.
[45] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. Grabcut:
Interactive foreground extraction using iterated graph cuts. In ACM
SIGGRAPH 2004 Papers, pages 309–314, 2004.
[46] GitHub. Github repository. https://github.com/uwm-bigdata/wound-segmentation, 2020.
[47] Bill Cassidy, Connah Kendrick, Neil D Reeves, Joseph M Pappachan,
Claire O’Shea, David G Armstrong, and Moi Hoon Yap. Diabetic foot
ulcer grand challenge 2021: evaluation and summary. In Diabetic Foot
Ulcers Grand Challenge, pages 90–105, 2021.
[48] Amirreza Mahbod, Gerald Schaefer, Rupert Ecker, and Isabella Ellinger.
Automatic foot ulcer segmentation using an ensemble of convolutional
neural networks. In 2022 26th Int. Conf. on Pattern Recogn. (ICPR), pages
1–6, 2022.
B. K. SHREYAMSHA KUMAR (Member, IEEE)
received the B.E. degree in electronics and com-
munication engineering from Bangalore Univer-
sity, Karnataka, India, in 2000, and the M.Tech.
degree in industrial electronics from the National
Institute of Technology Karnataka, Surathkal, Kar-
nataka, India, in 2004, and the Ph.D. degree in
electrical and computer engineering from Con-
cordia University, Montreal, Quebec, Canada, in
2019.
From 2004 to 2012, he was with the Central Research Laboratory (A
Corporate Research Facility of Bharat Electronics), Bangalore, India as
a member (Research Staff), and from 2012 to 2019, he was a Research
Associate with the Signal Processing Group, Department of Electrical and
Computer Engineering, Gina Cody School of Engineering and Computer
Science at Concordia University, Montreal, Quebec, Canada. He is cur-
rently working at Digital Medicine and Medical Technology Unit, Business
Transformation Group of TATA Consultancy Services as a Scientist. He has
published several papers in peer-reviewed Journals and major Conferences.
His research interests include computer vision, visual tracking, image fusion,
image denoising, image encryption, medical image processing, and docu-
ment image processing.
Dr. Shreyamsha was a recipient of the R&D Excellence Award conferred by Bharat Electronics, India. He was a member of the project team that received the Raksha Mantri's Award for Excellence in the "Innovation" category for 2008–2009, presented in November 2010 by the Hon'ble Raksha Mantri, Ministry of Defence, Government of India. He also received several awards from Concordia University and the prestigious Doctoral Research Merit Scholarship for Foreign Students from the Ministère de l'Éducation, de l'Enseignement Supérieur et de la Recherche (MEESR) du Québec during his doctoral studies. He has served as a reviewer for several journals and major conferences.
ANANDAKRISHNAN KC is currently pursuing the Ph.D. degree in computer science at Amritha Vishwavidyapeetham, Coimbatore, India. He is a Machine Learning Solution Architect at TATA Consultancy Services.
MANISH SUMANT received the M.S. degree in biomedical engineering from Washington University in St. Louis, in 1995. He has worked on medical device development, focusing on software technologies for embedded devices, diagnostic software, and cloud-based solutions. He has a proven record of turning research technologies and algorithms into innovative solutions. He is currently a Solutions Architect with the Digital Medicine and Medical Technologies Unit at TCS.
SRINIVASAN JAYARAMAN (Member, IEEE) is the Principal Scientist and Head of the BioComputational and Imaging Program, TCS Research, Digital Medicine and Medical Technologies Unit, BTG, Cincinnati, USA. During his sabbatical, he worked as a Visiting Scholar at MCCHE, Desautels Faculty of Management, McGill University, Montreal, Canada, in July 2017; a Research Fellow at the Énergie Matériaux Télécommunications Research Centre (INRS-EMT), Montreal, Canada, in July 2016; and a Post-doctoral Fellow (Scientific Manager) at the University of Nebraska–Lincoln (UNL), USA, and the New Jersey Institute of Technology (NJIT), USA, in 2013.
Dr. Srinivasan has 7 international patents granted, 6 international patents published, 5 international patents filed, 4 book chapters, and more than 30 publications, and has chaired international conferences and workshops. His research interests include the establishment of Digital BioTwins of human organs, biosignal processing, cardiac computational modeling, human performance and behavioral modeling, ontology, AI, personalized diagnosis systems, wearable devices, and medical device development.
Dr. Srinivasan's work on ECG for person identification and authentication won the MIT TR35 (Young Innovator) 2011 award from the MIT Technology Review India edition. In the same year, another work on portable cardiac devices won the "Dare to Try" TATA Innovista 2011 award. In addition, the World CSR Congress & World CSR Day recognized him, for his work on ECG as a biometric system for individual identification, as one of the 50 most socially impactful innovators (global listing) in 2016.