Skin Cancer Prediction using Machine Learning and
Neural Networks
Dr. Neha Tyagi
Associate Professor, Department of CSE
Amity University, Greater Noida, Uttar Pradesh, India
nehacs1988@gmail.com

Dr. Bhasker Pant
Professor, Department of Computer Science & Engineering
Graphic Era Deemed to be University, Dehradun, Uttarakhand, India
bhasker.pant@geu.ac.in

Dr. Logeshwari Dhavamani
Associate Professor, Department of Information Technology
St. Joseph's College of Engineering, Chennai, India
logeshgd@gmail.com

Mr. Dilip Kumar Jang Bahadur Saini
Assistant Professor, Department of Computer Science and Engineering
Himalayan School of Science and Technology, Swami Rama Himalayan University
Swami Ram Nagar, Jolly Grant, Dehradun, Uttarakhand, India
dilipsaini@gmail.com

Dr. Mohammed Saleh Al Ansari
Associate Professor, Department of Chemical Engineering
College of Engineering, University of Bahrain, Bahrain
Malansari.uob@gmail.com

Joshuva Arockia Dhanraj
Department of Automation and Robotics (ANRO), Department of Mechatronics Engineering
Hindustan Institute of Technology and Science, Padur, Chennai, India
joshuva1991@gmail.com
Abstract
The apparent similarity between skin conditions makes medical diagnosis difficult. Although melanoma is the best-known type of skin cancer, other skin diseases have recently been responsible for a large number of fatalities. One of the biggest challenges in creating a dependable automatic classification system is the absence of large datasets. This paper presents a deep learning (DL) system for identifying skin cancer. The rapid growth rate of melanoma skin cancer, its high treatment cost, and its mortality rate have all heightened the need for timely diagnosis of skin disease. In most cases, treating cancer requires time and careful detection. The promise of deep learning and machine learning (ML) has been demonstrated repeatedly in the medical sector. Skin cancer classification has attracted growing research attention because it is amenable to visual pattern recognition. Studies have shown that DL-based image classifiers can enhance skin cancer diagnosis and are on par with, or even outperform, human specialists. In this study, we present a deep learning architecture that can identify skin cancer. Five state-of-the-art convolutional neural networks were trained using transfer learning to provide both a plain classifier and a hierarchical (two-stage) classifier that can differentiate between seven different types of moles. Experiments were conducted on the HAM10000 database, a large collection of dermatoscopic images, with data augmentation methods used to boost performance. The DenseNet201 network performed best in these experiments, as shown by the high classification accuracies and F-measures achieved with very few false negatives. The plain model outperformed the two-level model, with the best result coming from the first level, i.e., a binary classification between nevi and non-nevi.
Index Terms: Deep Learning, Medical, Cancer, Machine Learning (ML), Neural Network, Melanoma.
I. INTRODUCTION
Convolutional neural networks (CNNs), the fundamental component of most DL-based image classifiers, have been continuously improved, and much of this progress can be credited to them. However, despite their broadly strong performance, CNN-based image classifiers have well-known problems, including a tendency to learn spurious correlations, sensitivity to slight image alterations, and adversarial vulnerability. A common underlying issue behind several of these failures is shortcut learning: instead of learning robust decision criteria that generalize to out-of-sample (OOS) data, for example genuine physical features of nevi and melanomas, the classifier learns correlated but meaningless properties that happen to exist in the training data (i.e., shortcuts). This frequently leads to weak, non-generalizable classifiers [1]. Such classifiers typically perform well on data similar to the training data but perform poorly, or fail entirely, on OOS data, even when the image alterations are slight (for instance, minor rotations or brightness changes).
Skin cancer classifiers naturally display possible signs of shortcut learning as well, such as the acquisition of artefacts, adversarial vulnerability, and general brittleness. Therefore, OOS testing of skin cancer classifiers ought to become commonplace. Such out-of-distribution (OOD) testing is certainly feasible given the range of available dermatoscopic data. Although these data come from many lesions and ostensibly cover a wide range of image-acquisition modalities, they may not challenge classifiers significantly [2]. For instance, the well-known ImageNet dataset has undergone several modifications to evaluate how classifiers react to significant distribution shifts in general object recognition. Comparably difficult test sets are still lacking in dermatology.
II. LITERATURE REVIEW
Malignant melanoma is one of the most common cancers detected in the US. Melanoma, the most serious form of skin cancer, has recently emerged as one of the biggest problems facing the public health system. Based on the most recent figures, 91,270 new cases of melanoma were expected to be diagnosed in the United States in 2018. Over the coming decades, both the prevalence of melanoma and the mortality it causes are anticipated to increase. According to a recent analysis, new melanoma diagnoses rose by 53% between 2008 and 2018. Survival statistics for this form of cancer are quite encouraging if it is detected early and treated appropriately [3]. If not, a patient's expected 5-year survival rate drops from 99% to 14%. Between 1994 and 2014, there was a sharp rise, up to 77%, in the detection of new cases of non-melanoma cancer. Basal cell carcinoma, the most prevalent non-melanoma skin cancer, claims the lives of 3,000 individuals annually.
Deep learning has been used to tackle exceedingly complicated classification and segmentation problems without any image pre-processing. In order to understand the various lesions, the design of these networks is mainly built on different kinds of convolutional layers, which analyse the images and extract their key characteristics. For instance, several imaging modalities have been used to discover the characteristics that identify dementia patients [4]. Convolutional Neural Networks (CNNs) have been used extensively and exhibit excellent performance in the analysis of images and videos. Today's CNNs take advantage of GPU computing power to perform a large number of operations in a matter of seconds, processing huge datasets to build solid models for object identification and segmentation, decision support systems, and image classification. Focusing on diagnostic imaging, deep networks have demonstrated excellent effectiveness in medical image processing as publicly available resources have grown [5]. In ultrasonic elastography, neural networks have previously been provided with additional proprietary information to perform strain reconstruction. Deep learning methods have also been used to interpret blood circulation from angiographies and to identify vessel boundaries. There is still a need for improvement, however, and new efforts in the classification of skin lesions have been presented [6]. Due to the two-stage methodology used in such work, deep networks may be used to segment data, extract characteristics, and then predict outcomes.
Additionally, the majority of these works concentrate on the two problematic classes, and other skin disorders are typically lumped together into one class rather than being categorised separately. This research aims to automate the classification of various mole categories without the need for human interaction. We present an end-to-end solution based on deep learning networks that the user can apply, without parameter adjustment, to identify skin conditions [7, 8]. In this study, transfer learning is used to assess how well state-of-the-art pre-trained deep networks identify melanoma. The renowned HAM10000 dataset, widely used as a training reference standard in dermatology, is used for the trials. More than half of the photos in this dataset, which contains over 10,000 images divided over seven classes, belong to the nevi class, making the classification task more difficult.
The functionality of convolutional neural networks has evolved enough to enable the creation of automated, unassisted algorithms that are used in various sectors, such as surveillance footage or automated vehicles, and they have become crucial tools for object detection, categorization, and identification. Due to the technological constraints of the images, there are several tasks performed by radiologists and clinicians in the field of medical imaging that require assistance to enhance their diagnosis [9]. The purpose of this research is to leverage the strength of deep learning on photographs to help doctors identify and categorise melanoma. The primary assumption of machine learning algorithms is that the data must have shared properties and a comparable distribution. Therefore, deep learning techniques suffer when heterogeneity appears and must be modified and retrained from scratch using new additional datasets [10].
In most cases, however, this approach is not feasible because of a lack of resources, such as the availability of images or a sufficient budget to cover the costs. In such circumstances, the well-known technique of transfer learning is useful, enabling one to retrain an existing effective model and customise it for a particular problem. Inception-ResNetV2, a mix of two well-known deep networks, aims to benefit from ResNet's residual connections by speeding up the training of the Inception network [11]. Specifically, Inception blocks that are more basic than the original one are used, and the feature maps created by the Inception block are expanded before the classification layers are applied. This 164-layer convolutional neural network was introduced at the 2015 ILSVRC challenge with the goal of improving performance on the ILSVRC 2012 classification task. In contrast to the earlier networks, MobileNetV2 is a portable neural network that is geared towards a variety of tasks and benchmarks while being tailored for resource-constrained contexts. The inverted residual with a linear bottleneck, a design that removes non-linearities while maintaining representational power, is the fundamental innovation of this network. The MobileNetV2 architecture consists of 53 layers in total, the first of which is a full convolution layer, followed by 19 residual bottleneck layers. The network uses a constant expansion rate. Datasets from ImageNet, COCO, and VOC were used to test the models.
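As an illustration of the transfer-learning setup discussed above, the following sketch loads a backbone pre-trained on ImageNet and attaches a new seven-class head. It assumes TensorFlow/Keras and uses DenseNet201 (the network highlighted in the abstract); the input size, dropout rate, optimizer, and training schedule are illustrative assumptions rather than values taken from the paper.

```python
# Hedged sketch of the transfer-learning setup: a DenseNet201 backbone pre-trained on
# ImageNet with a new 7-class head for HAM10000. Input size, dropout, optimizer and
# schedule are illustrative assumptions, not values reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # akiec, bcc, bkl, df, mel, nv, vasc

base = tf.keras.applications.DenseNet201(
    weights="imagenet",          # start from ImageNet weights (transfer learning)
    include_top=False,           # drop the original 1000-class head
    input_shape=(224, 224, 3),
)
base.trainable = False           # first stage: train only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# train_ds / val_ds would be tf.data pipelines of HAM10000 images with one-hot labels.
```

The same head-replacement pattern applies to the other backbones mentioned above (Inception-ResNetV2, MobileNetV2), with only the base model call changing.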
III. RESEARCH METHODOLOGY
Research can be summed up as the methodical inquiry into and analysis of sources and methods in order to gather findings and draw new conclusions. Research methodology, defined as a way to address the research issue systematically, dictates how such an inquiry will be conducted. Study of all kinds is primarily founded on a set of underlying assumptions about what constitutes valid research, so it is essential to apply the right technique to accomplish the research goals and ensure the validity of the results. There is no single methodology that works for all research; rather, the technique must be chosen depending on the nature, extent, and type of data that are relevant to the research question.
Secondary and empirical analysis has been carried out, based on the research topic, to produce the analysis presented in the following section.
IV. DATA ANALYSIS AND DATA AUGMENTATION
The actual outcomes and evaluation of the suggested methodologies have been obtained using the freely accessible HAM10000 dataset. The HAM10000 collection comprises 10,015 dermoscopic images from seven distinct classes: actinic keratosis (akiec), benign keratosis (bkl), nevi (nv), melanoma (mel), basal cell carcinoma (bcc), dermatofibroma (df), and vascular lesions (vasc). Regardless of the number of classification difficulties, this dataset has regularly been used as a baseline for comparing people and machines. The 10,015 dermoscopic photos were gathered over more than 20 years from the dermatology department of the Medical University of Vienna (Austria) and the skin cancer clinic of Cliff Rosendahl in Queensland (Australia). The first significant flaw of this dataset is the unequal distribution of the data. The nevi class has over 7,000 photos, whilst the other classes only have around 1,000 or fewer each. This could prompt the network to focus on photos of benign keratoses and other conditions that resemble nevi. There are also few photographs of dermatofibromas or vascular lesions. Because these classes contribute significantly fewer photographs to the test set than the others, the test-set performance must be assessed carefully [12]. Therefore, the skewed distribution must be balanced in the training stage by applying augmentation techniques. This data augmentation, applied throughout the training phase, was carried out using various rotations and reflections of the original pictures. The selected data augmentation methods were the following:
Horizontal flipping with a probability of 0.5.
Vertical flipping, likewise with a probability of 0.5.
Image rotations with a probability of 0.75 and an arbitrary angle between -90 and 90 degrees.
The 7:1 proportion between nevi and other skin lesions can be mitigated but not eliminated by data augmentation. In order to enhance the categorization of the non-nevi pictures, the two-stage model was applied after the dataset was initially balanced.
Figure 1: “Class distribution of the HAM10000 dataset”
(Source: [12])
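A minimal sketch of the augmentation policy listed above (horizontal and vertical flips with probability 0.5, rotation with probability 0.75 by a random angle in [-90, 90] degrees) is given below. torchvision is one possible implementation choice; the library and parameter names are assumptions, since the paper does not specify them.

```python
# Illustrative augmentation pipeline matching the policy described above; torchvision is
# one possible choice, since the paper does not name a library.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                      # horizontal flip, p = 0.5
    transforms.RandomVerticalFlip(p=0.5),                        # vertical flip, p = 0.5
    transforms.RandomApply(                                      # rotate with probability 0.75
        [transforms.RandomRotation(degrees=(-90, 90))], p=0.75),
    transforms.ToTensor(),
])
# Applied to training images only, e.g. torchvision.datasets.ImageFolder("ham10000/train",
# transform=augment), so the skewed class distribution is partially compensated.
```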
Linear regression analysis has been utilised to detect the main factors related to skin cancer. A simple regression consists of one predictor variable and one response variable; multiple regression analysis involves a response variable and several independent factors. The formulation is the following:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + \varepsilon \qquad (1)$$

where $x_1, \dots, x_n$ are the relevant factors, $y$ is the dependent variable, $\varepsilon$ is the stochastic error term, $\beta_1, \dots, \beta_n$ are the regression coefficients, and $\beta_0$ is the y-intercept.
The regression metrics considered include the mean absolute error (MAE), the root mean squared error (RMSE), the relative squared error (RSE), the relative absolute error (RAE), and the coefficient of determination (CD). The MAE is frequently used to determine how closely forecasted results match actual outcomes. The RMSE may be used to analyse systems whose errors are expressed in the same units [13]. The relative absolute error may be used to compare models whose errors are measured over different units, and the relative squared error can likewise be compared between diverse models whose errors are expressed in different units. The CD summarises the empirical quality of the linear regression fit. These errors are calculated with the following formulas:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \qquad (2)$$

where $y_i$ is the real value and $\hat{y}_i$ is the prediction.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \qquad (3)$$

$$\mathrm{RAE} = \frac{\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|}{\sum_{i=1}^{n}\left|y_i - \bar{y}\right|} \qquad (4)$$

where $\bar{y}$ is the mean of $y$.

$$\mathrm{RSE} = \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2} \qquad (5)$$

Coefficient of Determination,
$$\mathrm{CD} = \frac{SST - SSE}{SST} \qquad (6)$$

Sum of Squares Regression,
$$SSR = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 \qquad (7)$$

Sum of Squares Total,
$$SST = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 \qquad (8)$$

Sum of Squares Error,
$$SSE = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \qquad (9)$$
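For reference, the metrics in Equations (2)-(9) can be computed directly from predictions. The following NumPy sketch is only an illustration of the formulas above, with variable names of our own choosing.

```python
# Illustrative computation of the regression metrics defined in Eqs. (2)-(9).
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    y_bar = y_true.mean()

    mae  = np.mean(np.abs(resid))                                  # Eq. (2)
    rmse = np.sqrt(np.mean(resid ** 2))                            # Eq. (3)
    rae  = np.sum(np.abs(resid)) / np.sum(np.abs(y_true - y_bar))  # Eq. (4)
    rse  = np.sum(resid ** 2) / np.sum((y_true - y_bar) ** 2)      # Eq. (5)
    ssr  = np.sum((y_pred - y_bar) ** 2)                           # Eq. (7)
    sst  = np.sum((y_true - y_bar) ** 2)                           # Eq. (8)
    sse  = np.sum(resid ** 2)                                      # Eq. (9)
    cd   = (sst - sse) / sst                                       # Eq. (6)
    return {"MAE": mae, "RMSE": rmse, "RAE": rae, "RSE": rse,
            "SSR": ssr, "SST": sst, "SSE": sse, "CD": cd}

# Example: regression_metrics([3.0, 2.5, 4.0], [2.8, 2.7, 3.9])
```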
The Two-Class Neural Network unit makes skin cancer prediction possible. This unit can be used to predict a target with just two possible values. The NN network consists of 13 inputs, 1 hidden layer, and 1 output. The mean squared error (MSE) is used to assess the model's performance and is measured using Eq. (10). The MSE is low when the NN model is performing well. The NN model employs a hidden layer and achieves an MSE of 0.2675.

$$\mathrm{MSE} = \frac{1}{N \cdot P}\sum_{i=1}^{N}\sum_{j=1}^{P}\left(d_{ij} - y_{ij}\right)^2 \qquad (10)$$

where $P$ is the number of components in the output, $N$ is the number of observations, $d_{ij}$ are the desired (target) outputs, and $y_{ij}$ are the actual outputs.
The options for the Train Model module include a learning rate of 0.1, 100 training iterations, an initial learning weight of 0.1, and the normalizer type (min-max). The min-max normalizer linearly rescales every attribute to the [0, 1] range and is computed using Eq. (11):

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)} \qquad (11)$$
The Score Model module makes it feasible to predict a class-wide outcome as well as the probability that it will occur. In this area, score labels and score probabilities are shown. The scored likelihoods indicate that the closer the score is to zero, the higher the probability of skin cancer; conversely, the closer the score is to one, the lower the probability of skin cancer. The reliability, precision, recall, and F1 measure of the NN model, based on true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), are shown in the Evaluate Model module. The TP instances are correctly identified, while the FP instances are recognised incorrectly. The TN instances are appropriately excluded from the dataset [14, 15, 16], and the FN instances are incorrectly ignored. Out of all predictions, a proportion of 0.978 is accurate. The performance of the NN model is assessed primarily through precision: the accuracy rate of positively identified instances is 0.962. The recall, i.e., the proportion of real positive instances correctly identified, is 1.000. The F1 score, the harmonic mean of precision and recall, is 0.981. The accuracy, precision, recall, and F1 score have been calculated using Equations (12), (13), (14), and (15).
==>?@= %2A%1
%2A%1AB2AB1 (12)
0?C=DEDFG %2
%2AB1 (13)
C=@HH %2
%2AB1 (14)
I& E=F?C %2
%2AB2AB1 (15)
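The reported figures follow directly from Equations (12)-(15); the small helper below is our own illustration of those formulas, not code from the paper.

```python
# Illustrative computation of accuracy, precision, recall and F1 from the
# confusion-matrix counts, following Eqs. (12)-(15).
def classification_metrics(tp, tn, fp, fn):
    accuracy  = (tp + tn) / (tp + tn + fp + fn)     # Eq. (12)
    precision = tp / (tp + fp)                      # Eq. (13)
    recall    = tp / (tp + fn)                      # Eq. (14)
    f1        = 2 * tp / (2 * tp + fp + fn)         # Eq. (15), harmonic mean of the two
    return accuracy, precision, recall, f1

# Example with made-up counts:
# acc, prec, rec, f1 = classification_metrics(tp=96, tn=880, fp=4, fn=0)
```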
The main concept of the support vector machine (SVM) is to transform the input $x \in \mathbb{R}^{I}$ through a nonlinear mapping onto a high, m-dimensional feature space. The SVM then identifies the optimal linear separating hyperplane for the training dataset, which is associated with a collection of support vectors [17]. A kernel function determines the transformation $\phi(x)$. The SVM employs the sequential minimal optimization (SMO) learning technique together with the widely used Gaussian kernel, which has fewer parameters than other kernels (such as the polynomial kernel):

$$K(x, x') = \exp\left(-\gamma \left\|x - x'\right\|^{2}\right), \quad \gamma > 0$$

Two hyper-parameters influence the performance of the classifier: the kernel parameter $\gamma$ and the penalty component $C$. The SVM output is given below:
$$f(x_i) = \sum_{j=1}^{m} y_j \alpha_j K(x_j, x_i) + b \qquad (16)$$

$$P(i) = \frac{1}{1 + \exp\left(A f(x_i) + B\right)} \qquad (17)$$

where $m$ denotes the number of support vectors, $y_j \in \{-1, 1\}$ is the binary class label, $b$ and $\alpha_j$ are the model's components, and $A$ and $B$ are model parameters obtained by solving a regularised supervised estimation problem. Whenever $N_c > 2$, the one-against-one method is employed, which involves training $N_c(N_c - 1)/2$ pairwise binary classifiers and combining them to provide the output of the classification algorithm [20].
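A minimal scikit-learn sketch of the classifier just described (Gaussian kernel, hyperparameters C and gamma, Platt-scaled probabilities, and a one-against-one scheme for more than two classes) is given below; the specific C and gamma values are assumptions, not settings reported in the paper.

```python
# Illustrative SVM with a Gaussian (RBF) kernel. scikit-learn's SVC uses the
# one-against-one scheme for multiclass problems and Platt scaling when
# probability=True, mirroring Eqs. (16)-(17). The C and gamma values are assumptions.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

svm = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf",                      # K(x, x') = exp(-gamma * ||x - x'||^2)
        C=1.0,                             # penalty component C
        gamma="scale",                     # kernel parameter gamma
        probability=True,                  # Platt scaling: P = 1 / (1 + exp(A*f(x) + B))
        decision_function_shape="ovo"),    # one-against-one decision values
)
# svm.fit(X_train, y_train)
# svm.predict_proba(X_test)   # class probabilities via the fitted sigmoid
```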
A. Sensitivity analysis
The sensitivity assessment is a basic procedure that evaluates the behaviour of the system when a particular input is changed after the training process. Let $\hat{y}_{a,j}$ be the output produced by holding all input parameters at their average values except $x_a$, which is varied over its full range $x_{a,j}$, with $j \in \{1, 2, \dots, L\}$ levels. The variance $V_a$ of $\hat{y}_{a,j}$ is employed as a measurement of input significance. If $N_c > 2$ (multiclass), the sum of the variances over every output class probability ($\hat{p}^{c}_{a,j}$) is calculated. A large variance $V_a$ indicates high relevance of $x_a$; as a result, the input's proportional significance $R_a$ is provided by:

$$R_a = \frac{V_a}{\sum_{i=1}^{I} V_i} \cdot 100\ (\%) \qquad (18)$$

The variable effect characteristic (VEC) curve can then be developed for a more precise examination; it depicts the $x_{a,j}$ values (x axis) against the $\hat{y}_{a,j}$ predictions (y axis).
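The one-dimensional sensitivity procedure described above can be sketched as follows; the model object, the number of levels L, and all variable names are our own illustrative choices rather than details from the paper.

```python
# Illustrative 1-D sensitivity analysis: vary one input over L levels while holding
# the others at their mean, and measure the variance of the model's predictions.
import numpy as np

def relative_importance(model, X, L=6):
    X = np.asarray(X, float)
    means = X.mean(axis=0)
    variances = []
    for a in range(X.shape[1]):
        levels = np.linspace(X[:, a].min(), X[:, a].max(), L)   # x_{a,j}, j = 1..L
        probe = np.tile(means, (L, 1))                          # all other inputs at their mean
        probe[:, a] = levels
        y_aj = model.predict_proba(probe)                       # predicted class probabilities
        variances.append(y_aj.var(axis=0).sum())                # V_a: variance summed over classes
    variances = np.array(variances)
    return variances / variances.sum() * 100.0                  # R_a in percent, Eq. (18)

# importances = relative_importance(svm, X_train)   # e.g. 'svm' from the previous sketch
```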
B. Measurement of performance evaluation
1) Classification accuracy (ACC)
The capacity of the algorithm to correctly predict the class labels of new and previously unseen data is referred to as classification accuracy. It is the proportion (percentage) of test-set samples successfully categorised by the classifier [18]. The accuracy level can therefore be used to evaluate categorization performance. That is,

$$\mathrm{Accuracy}(T) = \frac{\left|\{\, t \in T : \mathrm{classify}(t) = t.c \,\}\right|}{|T|} \cdot 100\ (\%) \qquad (19)$$
where $T$ is the data set that must be categorised (the sample set in this case), $t \in T$, $t.c$ is the category of the object $t$, and $\mathrm{classify}(t)$ gives the categorization of $t$ produced by the classifier that was used (here, SVM and MLPE).
The performance evaluation in this section is mostly based on the classification results defined by Equations (18) and (19).
V. CONCLUSION
Artificial intelligence is making rapid headway in the field of dermatology. It can revolutionize clinical outcomes, particularly by improving the sensitivity and precision of detection for skin lesions, especially cancer. However, medical and photographic datasets covering all skin types are needed for AI research, and these can only be acquired through increased global coordination of skin imaging. It is necessary to record the sensitivity, specificity, and effectiveness in future research and in real-world environments. AI is not a threat to dermatologists' expertise; instead, over the next decades it may be employed to enhance clinical practice. If practising dermatologists gain a greater knowledge of AI concepts, they can succeed in delivering reliable skin care. Protecting health information, gaining access to huge databases, and retraining the AI algorithms to improve diagnostic accuracy are some of the hurdles in applying AI to the identification of skin cancer. In this study, a convolutional neural network-based method for classifying melanoma has been suggested. The method is being developed to assist individuals and medical professionals in the detection and classification of benign or malignant skin cancer types. Additionally, this research provides a brief overview of the sensitivity analysis and performance measurements of the convolutional neural network model for improved skin cancer detection accuracy. According to the experiments and assessment, the approach may serve as a baseline for helping medical practitioners find skin cancer: a professional can obtain accurate findings from a few random photos, whereas the traditional procedure takes too long to diagnose patients accurately.
REFERENCES
[1] Refianti, R., Mutiara, A.B. and Priyandini, R.P., 2019. Classification
of melanoma skin cancer using convolutional neural
network. International Journal of Advanced Computer Science and
Applications, 10(3).
[2] Jinnai, S., Yamazaki, N., Hirano, Y., Sugawara, Y., Ohe, Y. and
Hamamoto, R., 2020. The development of a skin cancer classification
system for pigmented skin lesions using deep
learning. Biomolecules, 10(8), p.1123.
[3] Thurnhofer-Hemsi, K. and Domínguez, E., 2021. A convolutional
neural network framework for accurate skin cancer detection. Neural
Processing Letters, 53(5), pp.3073-3093.
[4] Hasan, M., Barman, S.D., Islam, S. and Reza, A.W., 2019, April. Skin
cancer detection using convolutional neural network. In Proceedings
of the 2019 5th international conference on computing and artificial
intelligence (pp. 254-258).
[5] V. Panwar, D.K. Sharma, K.V.P.Kumar, A. Jain & C. Thakar, (2021),
“Experimental Investigations And Optimization Of Surface Roughness
In Turning Of EN 36 Alloy Steel Using Response Surface
Methodology And Genetic Algorithm” Materials Today: Proceedings,
https://Doi.Org/10.1016/J.Matpr.2021.03.642
[6] A. Jain, A. K. Pandey, (2019), “Modeling And Optimizing Of
Different Quality Characteristics In Electrical Discharge Drilling Of
Titanium Alloy (Grade-5) Sheet” Material Today Proceedings, 18,
182-191. https://doi.org/10.1016/j.matpr.2019.06.292
[7] Maron, R.C., Schlager, J.G., Haggenmüller, S., von Kalle, C., Utikal,
J.S., Meier, F., Gellrich, F.F., Hobelsberger, S., Hauschild, A., French,
L. and Heinzerling, L., 2021. A benchmark for neural network
robustness in skin cancer classification. European Journal of
Cancer, 155, pp.191-199.
[8] Maron, R.C., Schlager, J.G., Haggenmüller, S., von Kalle, C., Utikal,
J.S., Meier, F., Gellrich, F.F., Hobelsberger, S., Hauschild, A., French,
L. and Heinzerling, L., 2021. A benchmark for neural network
robustness in skin cancer classification. European Journal of
Cancer, 155, pp.191-199.
[9] Rezvantalab, A., Safigholi, H. and Karimijeshni, S., 2018.
Dermatologist level dermoscopy skin cancer classification using
different deep learning convolutional neural networks
algorithms. arXiv preprint arXiv:1810.10348.
[10] A. Jain, A.K.Yadav & Y. Shrivastava (2019), “Modelling and
Optimization of Different Quality Characteristics In Electric Discharge
Drilling of Titanium Alloy Sheet” Material Today Proceedings, 21,
1680-1684. https://doi.org/10.1016/j.matpr.2019.12.010
[11] Rezvantalab, A., Safigholi, H. and Karimijeshni, S., 2018.
Dermatologist level dermoscopy skin cancer classification using
different deep learning convolutional neural networks
algorithms. arXiv preprint arXiv:1810.10348.
[12] Vijayalakshmi, M.M., 2019. Melanoma skin cancer detection using
image processing and machine learning. International Journal of
Trend in Scientific Research and Development (IJTSRD), 3(4), pp.780-
784.
[13] Nahata, H. and Singh, S.P., 2020. Deep learning solutions for skin
cancer detection and diagnosis. In Machine Learning with Health Care
Perspective (pp. 159-182). Springer, Cham.
[14] Chaturvedi, S.S., Tembhurne, J.V. and Diwan, T., 2020. A multi-class
skin Cancer classification using deep convolutional neural
networks. Multimedia Tools and Applications, 79(39), pp.28477-
28498.
[15] Zhang, C., Pan, X., Li, H., Gardiner, A., Sargent, I., Hare, J.S., et al., 2017. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 140, pp.133-144.
[16] A. Jain, A. K. Pandey, (2019), “Modeling And Optimizing Of
Different Quality Characteristics In Electrical Discharge Drilling Of
Titanium Alloy (Grade-5) Sheet” Material Today Proceedings, 18,
182-191. https://doi.org/10.1016/j.matpr.2019.06.292
[17] A. Jain, A. K. Pandey, (2019), “Multiple Quality Optimizations In
Electrical Discharge Drilling Of Mild Steel Sheet” Material Today
Proceedings, 8, 7252-7261.
https://doi.org/10.1016/j.matpr.2017.07.054
[18] V. Panwar, D.K. Sharma, K.V.P.Kumar, A. Jain & C. Thakar, (2021),
“Experimental Investigations And Optimization Of Surface Roughness
In Turning Of EN 36 Alloy Steel Using Response Surface
Methodology And Genetic Algorithm” Materials Today: Proceedings,
https://Doi.Org/10.1016/J.Matpr.2021.03.642
[19] A. Jain, C. S. Kumar, Y. Shrivastava, (2021), “Fabrication and
Machining of Fiber Matrix Composite through Electric Discharge
Machining: A short review” Material Today Proceedings.
https://doi.org/10.1016/j.matpr.2021.07.288
[20] Bi, L., Kim, J., Ahn, E. and Feng, D., 2017. Automatic skin lesion analysis using large-scale dermoscopy images and deep residual networks. arXiv preprint arXiv:1703.04197.