Table 3 - uploaded by Joao Manuel R. S. Tavares
Comparison of the reviewed segmentation methods for skin lesions, both in macroscopic and dermoscopy images.


Source publication
Article
Full-text available
Background and objectives: Because skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to...

Context in source publication

Context 1
... automatically defining the initial contours, mainly to be used with the active contour model [7,63,67]. Table 3 enables a performance comparison of the reviewed methods for segmenting both macroscopic and dermoscopy images of skin lesions, most of which operate automatically. The segmentation results are either compared against a ground truth defined by one or more specialists, or their quality is assessed visually. ...
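The ground-truth comparison described in this context is typically quantified with region-overlap metrics such as the Dice coefficient and the Jaccard index. A minimal sketch follows; the flat binary masks and function names are illustrative, not the reviewed methods' actual evaluation code:

```python
# Sketch: overlap metrics for comparing a predicted segmentation mask
# against a specialist-defined ground-truth mask (toy flat binary masks;
# real evaluations operate on full-resolution binary images).

def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard (IoU) = |A∩B| / |A∪B| over flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 1, 0, 0, 0, 1, 0]   # hypothetical method output
truth = [1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical specialist annotation
print(dice_coefficient(pred, truth))  # 0.75
print(jaccard_index(pred, truth))     # 0.6
```

Dice weighs the intersection twice, so it is always at least as large as Jaccard on the same pair of masks; both reach 1.0 only for a perfect match.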

Similar publications

Article
Full-text available
Skin cancer is one of the fastest-growing cancers in humans. It initially starts in the outer layer of the body and spreads unevenly, increasing in diameter. The formation of skin cancer depends on the weakness of skin cells. To reach an accurate diagnosis, dermatologists should use computational methods. The identification of skin...

Citations

... Therefore, these techniques require pre-processing of the images to reduce the effect of artifacts. Some of the commonly used pre-processing techniques include illumination correction, artifact removal, and contrast enhancement [14]. However, pre-processing not only increases the computational cost of segmentation but also causes a blurring effect and significant loss of fine structure details of the images, resulting in poor overall segmentation accuracy. ...
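As an illustration of the contrast-enhancement step mentioned in this citation, the following is a minimal histogram-equalization sketch over toy 8-bit grayscale values; the function name and data are illustrative assumptions, not code from the cited work:

```python
# Sketch: global histogram equalization, a common contrast-enhancement
# pre-processing step (toy flat list of 8-bit gray values; real
# pipelines apply this to each image channel).

def equalize(pixels, levels=256):
    """Map each intensity through the normalized cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for i, h in enumerate(hist):
        running += h
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero CDF value
    n = len(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast patch clustered around mid-gray spreads across the range.
print(equalize([100, 100, 101, 102, 103, 103]))  # [0, 0, 64, 128, 255, 255]
```

This also illustrates the trade-off noted above: stretching intensities globally amplifies contrast but can equally amplify artifacts, which is why pre-processing can hurt fine structural detail.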
Article
Full-text available
The automatic segmentation of skin lesions in dermoscopic images is a challenging task due to the presence of artifacts, small lesion sizes, and low contrast between lesions and non-lesion regions. Deep learning models such as U-Net have been used for accurate segmentation in general. Still, their success rate is limited in the case of dermoscopic images due to these challenges. In this paper, we propose a U-Net-based model for efficient and effective segmentation of skin lesions in dermoscopic images. The proposed model, called Attention Residual U-Net with a modified decoder (ARU-Net-MD), employs an encoder-decoder architecture with residual learning, attention gates, and a modified decoder with a combined loss function to achieve higher accuracy for the semantic segmentation of dermoscopic images. Residual learning allows for an efficient model with fewer parameters, while attention gates highlight important features, and the modified decoder with a combined loss function assists the learning process and enables the model to learn more semantic information during training. We evaluated our model on four publicly available datasets, PH², ISIC 2016, ISIC 2017, and ISIC 2018, and observed an accuracy of 0.96, 0.97, 0.95, and 0.96, respectively, outperforming other state-of-the-art skin lesion segmentation models.
... DL has evolved into a common technique for building networks capable of modeling higher-order systems with human-like performance. [2][3][4][5] Today, numerous studies have applied DL methods to medical images for COVID-19 detection and have shown noteworthy results. [1,[6][7][8] Nevertheless, few studies on semantically segmenting medical images of COVID-19 patients have been published recently. ...
... [1,[6][7][8] Nevertheless, few studies on semantically segmenting medical images of COVID-19 patients have been published recently. [3,6] So, in this study, we aimed to use DL methods to segment chest CT-scan images for the diagnosis of COVID-19. ...
Article
Full-text available
ABSTRACT Background Artificial intelligence (AI) techniques have been ascertained useful in the prompt analysis and description of infectious areas in radiological images. Our aim in this study was to design a web-based application for detecting and labeling infected tissues on CT (computed tomography) lung images of patients based on the deep learning (DL) method as a type of AI. Materials and Methods The U-Net architecture, one of the DL networks, is used as a hybrid model with the pre-trained densely connected convolutional network 121 (DenseNet121) architecture for the segmentation process. The proposed model was constructed on CT-scan images of 1031 persons from Ibn Sina Hospital of Iran in 2021 and some publicly available datasets. The network was trained using 6000 slices, validated on 1000 slices, and tested on 150 slices. Accuracy, sensitivity, specificity, and area under the receiver operating characteristics (ROC) curve (AUC) were calculated to evaluate model performance. Results The results indicate the acceptable ability of the U-Net-DenseNet121 model in detecting COVID-19 abnormality (accuracy = 0.88 and AUC = 0.96 for a threshold of 0.13; accuracy = 0.88 and AUC = 0.90 for a threshold of 0.2). Based on this model, we developed the “Imaging-Tech” web-based application for use at hospitals and clinics to make our project’s output more practical and attractive in the market. Conclusion We designed a DL-based model for the segmentation of COVID-19 CT scan images and, based on this model, constructed a web-based application that, according to the results, is a reliable detector of infected tissue in lung CT-scans. The availability of such tools would aid in automating, prioritizing, accelerating, and broadening the treatment of COVID-19 patients globally.
... Moreover, removing all potential biases is often very hard, if not impossible, especially in computer vision. One of the applications in which the problem of artifact removal has been analyzed for many years is the issue of analyzing skin lesions for possible cancer detection 9,10 . Also, in other fields and applications, this problem is actively analyzed by many researchers. ...
Preprint
Full-text available
The paper proposes a new and effective bias mitigation method called Targeted Data Augmentation (TDA). Since removing biases is a tedious, always difficult and, on the other hand, not necessarily effective approach, the authors propose to skillfully insert them instead. To show the efficiency of and to validate the proposed approach, two representative and very diverse datasets, a dataset of clinical skin lesions and a dataset of male and female faces, were selected to serve as the benchmarks. The existing biases were first manually examined, identified, and annotated. Then, the use of Counterfactual Bias Insertion provided confirmation that biases like the frame, ruler, and glasses strongly affect the models. To make the models more robust against them, Targeted Data Augmentation was used: in short, the samples were modified during training by randomly inserting biases. The proposed method resulted in a significant decrease in bias measures, more specifically, from a two-fold to more than 50-fold improvement after training with TDA, with a negligible increase in the error rate.
... Medical imaging is a fundamental component of modern health care, as exemplified by the widespread development of computational analysis algorithms for clinical diagnosis (Oliveira et al 2016, Gómez-Flores et al 2020. Widely employed imaging modalities in clinical practice include x-ray, computed tomography (CT), and magnetic resonance imaging (MRI) (Kasban et al 2015). ...
Article
Full-text available
Objective: Recognizing the most relevant seven organs in an abdominal computed tomography (CT) slice requires sophisticated knowledge. This study proposed automatically extracting relevant features and applying them in a content-based image retrieval (CBIR) system to provide similar evidence for clinical use. Approach: A total of 2827 abdominal CT slices, including 638 liver, 450 stomach, 229 pancreas, 442 spleen, 362 right kidney, 424 left kidney and 282 gallbladder tissues, were collected to evaluate the proposed CBIR in the present study. Upon fine-tuning, high-level features used to automatically interpret the differences among the seven organs were extracted via deep learning architectures, including DenseNet, Vision Transformer (ViT), and Swin Transformer v2 (SwinViT). Three images with different annotations were employed in the classification and query. Main results: The resulting performance included classification accuracy (94%-99%) and retrieval results (0.98-0.99). Considering global features and multiple resolutions, SwinViT performed better than ViT. ViT also benefited from a better receptive field to outperform DenseNet. Additionally, the use of whole images can obtain almost perfect results regardless of which deep learning architecture is used. Significance: The experiment showed that using pretrained deep learning architectures and fine-tuning with enough data can achieve successful recognition of seven abdominal organs. The CBIR system can provide more convincing evidence for recognizing abdominal organs via similarity measurements, which could lead to additional possibilities in clinical practice.
... The most popular application is medical use, especially the discrimination and detection of skin cancer. [14][15][16][17][18][19][20] There have also been reports on makeup, for example, facial attractiveness evaluation, automatic makeup generation, and makeup pattern suggestion. [21][22][23][24] We hypothesized that deep learning technology could be used to obtain a makeup finish evaluation technology that can evaluate subtle textures as well as human visual evaluation can. ...
Article
Full-text available
Background Skin color and texture play a significant role in influencing impressions. To understand the influence of skin appearance and to develop better makeup products, objective evaluation methods for makeup finish have been explored. This study aims to apply machine learning technology, specifically deep neural networks (DNNs), to accurately analyze and evaluate delicate and complex cosmetic skin textures. Methods “Skin patch datasets” were extracted from facial images and used to train a DNN model. The advantages of using skin patches include retaining fine texture, eliminating false correlations from non‐skin features, and enabling visualization of the inferred results for the entire face. The DNN was trained in two ways: a classification task to classify skin attributes and a regression task to predict the visual assessment of experts. The trained DNNs were applied for the evaluation of actual makeup conditions. Results In the classification task training, skin patch‐based classifiers were developed for age range, presence or absence of base makeup, formulation type (powder/liquid) of the applied base makeup, and immediately after/a while after makeup application. The DNNs trained on the regression task showed high prediction accuracy for the experts’ visual assessment. Application of the DNNs to the evaluation of actual makeup conditions clearly showed appropriate evaluation results in line with the appearance of the makeup finish. Conclusion The proposed method of using DNNs trained on skin patches effectively evaluates makeup finish. This approach has potential applications in visual science research and cosmetics development. Further studies can explore the analysis of different skin conditions and the development of personalized cosmetics.
... Images may be taken by family members, friends, physicians, nurses, or others in the community, and variability in image quality is a significant contributing factor to the accuracy of ML algorithms. Image angle, zoom, sharpness, lighting exposure, color balance, and other quality characteristics may pose challenges in differentiating true lesion texture from artificial components of an image 31,45 . ...
... Pertaining to image quality, another limitation is the analysis of a three-dimensional structure in a two-dimensional image 28 . Lesions exist in all areas of the skin, and areas of curvature prevent even light exposure and can pose challenges for CNN algorithms 45 . ...
Article
Full-text available
Background: Artificial intelligence (AI) is increasingly investigated for use in dermatologic conditions. We review recent literature on AI, its potential application for pediatric dermatology, and its impact on the underserved community. Objective: To evaluate the current state of AI in dermatology and its application to pediatric patients. Methods: Literature search was performed in PubMed and Google Scholar using the following key terms in combination with "pediatric", and "dermatology": "artificial intelligence," "AI," "machine learning," "augmented intelligence," "neural network," and "deep learning". Results: Current research is based on images from adult databases, with minimal delineation of patient age. Most literature on AI and dermatologic conditions pertains to melanoma and non-melanoma skin cancers, reporting accuracy from 67-99%. Other commonly studied diseases include psoriasis, acne vulgaris, onychomycosis, and atopic dermatitis, having varying accuracy, sensitivity, and specificity. A recently developed AI algorithm for diagnosis of infantile hemangioma found 91.7% accuracy. AI may be a means to increase access to pediatric dermatologic care, yet challenges remain for its use in underserved communities. Conclusion: Literature on AI systems for dermatologic diseases continues to grow. Further research may tailor AI algorithms for pediatric patients and those of diverse skin color to decrease algorithm bias and increase diagnostic accuracy.
... In particular, DL systems can process complex, high-dimensional data such as images [16,17]. In recent years, ML/DL applications have increased exponentially as a diagnostic aid in dermatology [18,19]. Methods for the analysis and classification of dermatological lesions may involve steps such as image acquisition, pre-processing, segmentation, feature extraction and lesion classification [5,20]. ...
Article
Full-text available
Leprosy is a neglected tropical disease that can cause physical injury and mental disability. Diagnosis is primarily clinical, but can be inconclusive due to the absence of initial symptoms and similarity to other dermatological diseases. Artificial intelligence (AI) techniques have been used in dermatology, assisting clinical procedures and diagnostics. In particular, AI-supported solutions have been proposed in the literature to aid in the diagnosis of leprosy, and this Systematic Literature Review (SLR) aims to characterize the state of the art. This SLR followed the preferred reporting items for systematic reviews and meta-analyses (PRISMA) framework and was conducted in the following databases: ACM Digital Library, IEEE Digital Library, ISI Web of Science, Scopus, and PubMed. Potentially relevant research articles were retrieved. The researchers applied criteria to select the studies, assess their quality, and perform the data extraction process. Moreover, 1659 studies were retrieved, of which 21 were included in the review after selection. Most of the studies used images of skin lesions, classical machine learning algorithms, and multi-class classification tasks to develop models to diagnose dermatological diseases. Most of the reviewed articles did not target leprosy as the study’s primary objective but rather the classification of different skin diseases (among them, leprosy). Although AI-supported leprosy diagnosis is constantly evolving, research in this area is still in its early stage, so further studies are required to make AI solutions mature enough to be transformed into clinical practice. Expanding research efforts on leprosy diagnosis, coupled with the advocacy of open science in leveraging AI for diagnostic support, can yield robust and influential outcomes.
... To extract it, the original red, green, and blue channels were first transformed into hue, saturation, and value channels. 39 Then, only the hue was kept for the subsequent feature extraction and combination. 40 ...
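The hue-extraction step described in this citation can be sketched with the standard-library `colorsys` module; the cited work does not specify a library, so the function name and the per-pixel loop here are illustrative assumptions:

```python
# Sketch: convert RGB pixels to HSV and keep only the hue channel
# (colorsys is used for illustration; real pipelines usually do this
# on whole image arrays, e.g. with OpenCV or scikit-image).
import colorsys

def hue_channel(rgb_pixels):
    """Return the hue in [0, 1) for each (r, g, b) pixel with 8-bit values."""
    return [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_pixels]

# Pure red, green, and blue map to hues 0, 1/3, and 2/3, respectively.
print(hue_channel([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))
```

Discarding saturation and value keeps the color tone while dropping brightness, which is why hue alone is attractive for features meant to be robust to lighting.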
Article
Full-text available
Background Acute stroke is the leading cause of death and disability globally, with an estimated 16 million cases each year. The progression of carotid stenosis reduces blood flow to the intracranial vasculature, causing stroke. Early recognition of ischemic stroke is crucial for disease treatment and management. Purpose A computer‐aided diagnosis (CAD) system was proposed in this study to rapidly evaluate ischemic stroke in carotid color Doppler (CCD). Methods Based on the ground truth from the clinical examination report, the vision transformer (ViT) features extracted from all CCD images (513 stroke and 458 normal images) were combined in machine learning classifiers to generate the likelihood of ischemic stroke for each image. The pretrained weights from ImageNet reduced the time‐consuming training process. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve were calculated to evaluate the stroke prediction model. The chi‐square test, DeLong test, and Bonferroni correction for multiple comparisons were applied to deal with the type‐I error. Only p values equal to or less than 0.00125 were considered to be statistically significant. Results The proposed CAD system achieved an accuracy of 89%, a sensitivity of 94%, a specificity of 84%, and an area under the receiver operating characteristic curve of 0.95, outperforming the convolutional neural networks AlexNet (82%, p < 0.001), Inception‐v3 (78%, p < 0.001), ResNet101 (84%, p < 0.001), and DenseNet201 (85%, p < 0.01). The computational time in model training was only 30 s, which would be efficient and practical in clinical use. Conclusions The experiment shows the promising use of CCD images in stroke estimation. Using the pretrained ViT architecture, the image features can be automatically and efficiently generated without human intervention. The proposed CAD system provides a rapid and reliable suggestion for diagnosing ischemic stroke.
... While using a segmentation-based approach can be reliable and effective in identifying lesions in medical images [57][58][59][60][61], we chose not to implement it for several reasons. Our main objective was multiclass classification, and we discovered that we were able to achieve accurate results without the need for segmentation. ...
Article
Full-text available
Skin lesion classification plays a crucial role in dermatology, aiding in the early detection, diagnosis, and management of life-threatening malignant lesions. However, standalone transfer learning (TL) models failed to deliver optimal performance. In this study, we present an attention-enabled ensemble-based deep learning technique, a powerful, novel, and generalized method for extracting features for the classification of skin lesions. This technique holds significant promise in enhancing diagnostic accuracy by using seven pre-trained TL models for classification. Six ensemble-based DL (EBDL) models were created using stacking, softmax voting, and weighted average techniques. Furthermore, we investigated the attention mechanism as an effective paradigm and created seven attention-enabled transfer learning (aeTL) models before branching out to construct three attention-enabled ensemble-based DL (aeEBDL) models to create a reliable, adaptive, and generalized paradigm. The mean accuracy of the TL models is 95.30%, and the use of an ensemble-based paradigm increased it by 4.22%, to 99.52%. The aeTL models’ performance was superior to the TL models in accuracy by 3.01%, and aeEBDL models outperformed aeTL models by 1.29%. Statistical tests show significant p-value and Kappa coefficient along with a 99.6% reliability index for the aeEBDL models. The approach is highly effective and generalized for the classification of skin lesions.
... One of the applications in which the problem of artifact removal has been analyzed for many years is the issue of analyzing skin lesions for possible cancer detection [1,22]. Also, in other fields and applications, this problem is actively analyzed by many researchers. ...
Preprint
Full-text available
The development of fair and ethical AI systems requires careful consideration of bias mitigation, an area often overlooked or ignored. In this study, we introduce a novel and efficient approach for addressing biases called Targeted Data Augmentation (TDA), which leverages classical data augmentation techniques to tackle the pressing issue of bias in data and models. Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance. To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces. These bias annotations are published for the first time in this study, providing a valuable resource for future research. Through Counterfactual Bias Insertion, we discovered that biases associated with the frame, ruler, and glasses had a significant impact on models. By randomly introducing biases during training, we mitigated these biases and achieved a substantial decrease in bias measures, ranging from two-fold to more than 50-fold, while maintaining a negligible increase in the error rate.
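The core TDA idea above, randomly inserting a known bias artifact during training, can be sketched as follows. The toy image representation, function name, and the black "ruler" bar standing in for a real artifact are all illustrative assumptions, not the authors' code:

```python
# Sketch of the Targeted Data Augmentation idea: with some probability,
# paste a bias artifact (here a black "ruler" bar) into a training image
# so the model learns to ignore it. The image is a toy 2-D list of gray
# values; a real pipeline would paste actual artifact crops into arrays.
import random

def insert_ruler(image, prob=0.5, rng=random):
    """Return a copy of `image` with a dark bar on a random row, with probability `prob`."""
    out = [row[:] for row in image]          # never mutate the original
    if rng.random() < prob:
        row = rng.randrange(len(out))
        out[row] = [0] * len(out[row])       # the simulated ruler artifact
    return out

img = [[128] * 4 for _ in range(4)]
augmented = insert_ruler(img, prob=1.0)      # force insertion for the demo
print(sum(row.count(0) for row in augmented))  # one full 4-pixel row is black
```

Because the artifact appears on random samples regardless of their label, it carries no predictive signal, which is the mechanism by which the reported bias measures decrease.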