RESEARCH ARTICLE
Identity verification using palm print microscopic images based
on median robust extended local binary pattern features and
k-nearest neighbor classifier
Amjad Rehman¹ | Majid Harouni² | Negar Haghani Solati Karchegani³ | Tanzila Saba¹ | Saeed Ali Bahaj⁴ | Sudipta Roy⁵

¹Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
²Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
³Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Isfahan, Iran
⁴MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
⁵Artificial Intelligence & Data Science Programme, JIO Institute, Navi Mumbai, Maharashtra, India

Correspondence
Majid Harouni, Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran.
Email: m.harouni@iauda.ac.ir
Funding information
No specific funding was received for this
research.
Review Editor: Alberto Diaspro
Abstract
Automatic identity verification is one of the most critical and research-demanding
areas. One of the most effective and reliable identity verification methods is using
unique human biological characteristics and biometrics. Among all types of biomet-
rics, palm print is recognized as one of the most accurate and reliable identity verifi-
cation methods. However, this biometrics domain also has several critical challenges:
image rotation, image displacement, change in image scaling, presence of noise in the
image due to devices, region of interest (ROI) detection, or user error. For this pur-
pose, a new method of identity verification based on median robust extended local
binary pattern (MRELBP) is introduced in this study. In this system, after normalizing
the images and extracting the ROI from the microscopic input image, the images
enter the feature extraction step with the MRELBP algorithm. Next, these features
are reduced by the dimensionality reduction step, and finally, feature vectors are clas-
sified using the k-nearest neighbor classifier. The microscopic images used in this
study were selected from IITD and CASIA data sets, and the identity verification rate
for these two data sets without challenge was 97.2% and 96.6%, respectively. In
addition, computed detection rates have been broadly stable against changes such as
salt-and-pepper noise up to 0.16, rotation up to 5°, displacement up to 6 pixels, and
scale change up to 94%.
KEYWORDS
legal identity for all, local descriptors, median robust extended local binary pattern,
microscopic palm print images, security
1|BACKGROUND
Identifying individuals based on biometrics is one of the most urgent
and vital issues in many areas, including security and surveillance
(Saba, Haseeb, Ahmed, & Rehman, 2020). A wide range of systems require individuals to be identified and verified (Raza et al., 2018; Rashid et al., 2020). For example, security agencies, office attendance devices, ATMs, and even computer systems have made the importance of identity verification clearer than ever. Biometric technology identifies people based on unique
human characteristics such as the face, iris, vessels, and palm print
(Fahad, Khan, Saba, Rehman, & Iqbal, 2018; Jamal, Hazim Alkawaz,
Rehman, & Saba, 2017; Rahim, Rehman, Kurniawan, & Saba, ). In
recent years, biometric technology with fingerprint features has been
widely used. Still, for some reasons such as finger surface moisture,
fingerprint degradation, and so on, it has always received severe criti-
cism (Saba, Rehman, Altameem, & Uddin, 2014b). In addition, multiple
biometrics such as fingerprints, faces and iris, palm vessels, and palm
Received: 1 August 2021 Accepted: 17 August 2021
DOI: 10.1002/jemt.23989
Microsc Res Tech. 2021;1–14. wileyonlinelibrary.com/journal/jemt © 2021 Wiley Periodicals LLC.
print, have been introduced (Ahmad, Sulong, Rehman, Alkawaz, &
Saba, 2014; Meethongjan, Dzulkifli, Rehman, Altameem, &
Saba, 2013). Still, recent studies show that palm print, one of the best biometric features, can provide unique information about humans for machine identification (Khan, Javed, Sharif, Saba, &
Rehman, 2019; Lung, Salam, Rehman, Rahim, & Saba, 2014). As with
other biometrics, palm prints are unique, stable, and unchanged over
time. Despite these advantages, palm print-based identity recognition systems still face significant challenges: improper alignment, region of interest (ROI) detection, and noise affecting the images (Saba, Rehman, Altameem, & Uddin, 2014b; Iqbal et al., 2019; Joudaki et al., 2014). These problems may eventually cause the loss of several pixels or alter some of the image information. This may also be due to human error,
which is known as a common and inevitable challenge (Khan et al.,
2021). Due to insufficient knowledge of how to use the system, users might place their hand on the scanner in a position other than the defined standard during identification. This unavoidable error reduces the identification rate and prevents identification in some cases. Therefore, a method resistant to this challenge appears to be an essential solution (Jabeen et al., 2018). Zhang, Wang, Huang, and
Zhang (2018) presented a new method of identity verification by
combining two features of weighted adaptive center symmetric local
binary pattern (WACSLBP) and weighted Sparse Representation-
Based Classification (WSRC). In their research, WACSLBP is used to
extract important features such as image edges and major palm print.
In this method, the CSLBP histogram is first extracted from each subblock; if a subblock's feature information is high, its histogram gains more weight, while subblocks with less information are assigned less weight. Finally, the produced histogram contains
the most important information on the palm print image's edges,
major lines, and wrinkles.
Finally, to reduce the computational complexity of classification at the identification stage, that study employed the weight structure of the weighted sparse representation-based classification (WSRC) algorithm to discriminate between the test sample and each training sample (Iftikhar et al., 2017; Nodehi et al., 2014). The CASIA and PolyU data sets were used to evaluate the proposed method, and the test images were identified in ten degraded image modes (CASIA Dataset). At the end of the study, a 98% identity verification rate was reported, which is acceptable given the challenges applied to the test images. Nonetheless, the research could have been more comprehensive had it also addressed important challenges such as displacement, rotation, and scaling.
Rida, Herault, Marcialis, and Gasso (2019) examined the impact of
using ensemble classification based on data-driven features in their
research. The purpose of this study was to present a method of group
classification using the Random Subspace Method. This method relies
on the two-dimensional principal component analysis (2DPCA) non-
random technique and provides approximately distinct random sub-
sets. Thus, after calculating the data variance in two-dimensional
space, the first components are considered the strongest image infor-
mation and generated a subset of data. At this point, the attributes of
each subdomain are extracted using two-dimensional linear discrimi-
nant analysis (2DLDA). Then, in the classification section, each subset is classified with a one-nearest-neighbor classifier. Finally, identification is carried out by voting across these classifiers. The data used in this study were selected from the PolyU data
set, and the method presented on these images was evaluated in red,
green, blue, infrared, colored, and 2D bands. The results of this evalu-
ation of the test images show that in all of the mentioned challenges
except the 2D images, the detection accuracy was higher than 99%.
In the 2D images challenges, the test images were identified with 94%
accuracy. Although the method is very fast in terms of time complexity, the challenges posed in this research are far from those encountered in reality; from this perspective, it cannot be regarded as comprehensive research.
Almaghtuf and Khelifi (2018) presented a method based on the
self-geometric relationship filter to improve the fit of points selected
by the scale-invariant feature transform (SIFT) algorithm. In this study,
due to the weakness of the SIFT algorithm in point matching, a geo-
metric filter to establish relationships between selected points and
compare the geometric patterns for identification has been used. In the preprocessing stage, the ROI is extracted from the full image for the PolyU II data set, while the images of the other two data sets are already provided as ROIs. Finally, the SIFT algorithm
is applied to the ROI from palm print images to extract key features in
the next step. However, their approach is based on Euclidean distance
that could not meet the challenges of this field.
For this reason, a central-point filtering method is introduced in response to this challenge: after selecting several points as central points at defined angles, the points nearest to each central point are identified and connected to it. Multiple graphs are created for each image, and finally the central points are identified with regard to the points connected to them. Evaluation of this
method was carried out using three data sets IIT-Delhi, PolyU, and
THUPALMLAB. Their study might not be comprehensive despite the
good results since it has not addressed rotation, displacement, or
image scaling issues. El-Tarhouni, Boubchir, Elbendak, and Bouridane
(2019) presented an approach based on pyramid histogram orienta-
tion gradient and Pascal coefficients of local binary patterns to
improve the identity verification rate using palm print images. Their
research is composed of a few stages. In the first stage, only the Pas-
cal Coefficient Multiscale Local Binary Pattern (PCMLBP) descriptor
was used to extract the features to compare the primary method of
this study. Secondly, Pyramid Histogram Orientation Gradient
descriptor has been used along with PCMLBP. Using this descriptor
makes the proposed method resistant to light changes. Thirdly, PCA is
used to reduce the dimensionality of the feature vectors and finally
the random subspace LDA is used for classification.
The classification method can be divided into two steps: first, calculating a confidence value for each class, and second, selecting the class for which the test sample reaches the maximum score (Rehman et al., 2021; Khan et al., 2020). The image data set used in this study is PolyU, and the
images are analyzed in four red, green, blue, and infrared bands. Also,
the detection rate in the proposed method is over 99% across all four
bands. But what keeps this research from being comprehensive is that it does not evaluate any significant challenges and is far from a real-world implementable system (Saba, 2020). Minaee and
Wang (2016) used a convolutional neural network and developed a
new way of identity verification. This research has tried to improve
the identity verification rate in palm print images by deep learning.
In their approach, after preprocessing the images, image features are extracted using a scattering wavelet. Notably, this descriptor has many parameters, including the scale, orientation, and depth of the wavelet. The scales and orientations of the filters used in the algorithm came from a very large set of Gabor wavelet filters, and the wavelet itself was tied to the scattering scheme used in the proposed method. The first depth of this algorithm behaves much like the SIFT algorithm, and the second depth acts like a directed sphere exclusion algorithm. The deeper the algorithm operated, the larger and more distinctive the feature vectors produced, and so the time complexity increased. Finally, after a dimensionality reduction step with PCA, all feature vectors were classified with an SVM and identity verification was performed. This method has been evaluated using PolyU data set images, and a 100% recognition rate has
been claimed. However, although this method has the highest possi-
bility of identification, no major challenge has been addressed in this
study to test the robustness of the approach against changes.
Fei et al. (2017) proposed a method based on local binary pat-
terns for identity verification with palm print imagery. They tried to
improve the algorithm detection of local binary patterns. In this study,
a local orientation binary model (LOBP) is introduced, in which Orien-
tation Binary Pattern (OBP) and Confidence Binary Pattern (CBP) sta-
tistical blocks are used to create a global high-speed descriptor. In
their method, the image is first divided into 16 × 16 nonoverlapping
blocks. The principal orientation, corresponding values, and confi-
dence are extracted for each block to obtain the OBP and CBP maps.
The OBP and CBP histograms are then calculated. The OBP and CBP
histograms are merged to create a global histogram, and the final fea-
ture vector is created. Finally, by adopting the distance calculation
system, identification is made. For evaluating the proposed method,
three data sets of PolyU, M Green, and IITD were used. The results
show that the proposed method outperforms the methods being
assessed and has been able to calculate the identity verification at
100%. However, this study has addressed no significant challenges,
which indicates that the study is incomplete. Mokni, Drira, and
Kherallah (2017) used contexture and geometric properties in their
study for calculating identity verification. This study presents a new
method for extracting major palm prints and a new geometric struc-
ture to analyze geometric shapes. In the geometric properties section,
the major palm print lines are extracted using curves obtained with steerable and hysteresis filters. New curve lines are then generated by averaging operations on the curves. In the contexture properties section, palm print image features are extracted using fractal analysis and
calculating fractal dimensions. The extracted features are merged at
the end of this section to improve identification accuracy in geometric
and contexture sections. At the end of this study, the data are catego-
rized by random forest based on geometrical, contexture, and merged
features, and the identification rate of each segment is calculated
separately. Three data sets of PolyU, CASIA, and IIT-Delhi were used
to evaluate the proposed method. The evaluation results show that
the identification rate with geometric properties is 93%, the contex-
ture features 96%, and the integration of the two features 98%.
Although this study yields acceptable results and confirms the effect
of integrating the results of different descriptors, it has not been eval-
uated in challenges such as rotation, transfer, and scale, which could
be a flaw for this research (Saba et al., 2014b).
Tamrakar and Khanna (2016) used an RDF (radon, dual-tree complex wavelet, Fourier) descriptor in their study of palm print identity verification to provide a method stable under rotation and noise. The study used three data sets, and for two of them the ROI had to be extracted from the raw image, meaning the study involves a preprocessing step. In the feature extraction step, the RDF descriptor is
used based on radon, dual-tree complex wavelets, Fourier transforms.
The integration of the properties of these three descriptors
strengthens the proposed method against rotation and noise. Since the radon coefficients contain contexture information from the image, a dual-tree complex wavelet is applied along the radon projection angles in a multi-resolution manner once the radon coefficients have been calculated. High-frequency coefficients are then extracted as features. Finally, applying a two-dimensional transform eliminates circular shifts. It should be noted that this makes the method very stable under the conditions mentioned above.
In the end, the vectors are classified by the nearest neighborhood
cluster. The data used to evaluate this study are from the three data
sets PolyU, CASIA, and IIT-Delhi, and the results show that the pro-
posed method is relatively stable to rotation and noise. Still, funda-
mental challenges such as transitions and scales are not investigated
in the above study. Xu, Fei, and Zhang (2014) conducted a fascinating study that combines left and right palm print images to improve the identity verification rate. This research was implemented in three phases: the first two concern identity verification using left- and right-hand images separately, and the third performs identity verification by combining the two. The final decision phase integrates the left-hand, right-hand, and combined results at the decision level. In the preprocessing stage,
only the ROI of the hand is removed from the image. The extracted
features of this study are very comprehensive, drawing on a wide range of descriptors, including the Gabor filter, Sparse Multiscale Competitive Code (SMCC), and derivatives of Gaussians, and adapting the SIFT, PCA, LDA, 2DLDA, and 2DPCA algorithms.
Collaborative representation-based classification and two-phase test sample sparse representation were used, which can be unreliable, and several classifiers were compared to identify the best method. The images used to evaluate the proposed method come from both the PolyU and IITD data sets, and the results show a very appropriate identification rate; the only point that can be considered a flaw is the lack of investigation of the method under significant challenges, such as noise, rotation, and scale, applied to the images. As mentioned before, most research is implemented in a common format and only performs the identification process under the most ideal conditions. But
the real world is a world of errors: errors in identifying the palm print, scanner errors, and malfunctions. The existence of such errors indicates the need for methods that account for them. For example, an error by the segmentation algorithm in correctly identifying the ROI region can leave the image generated from the ROI rotated, shifted, rescaled, and so on relative to the target image (Khan et al., 2021).
On the other hand, the scanner may have a technical problem, so the scanned images are noisy (Saba, Rehman, Al-Dhelaan, Al-Rodhaan, 2014). When all of the above are combined with human errors in using security systems, it is certain that the real situation is by no means the ideal one (Sharif et al., 2019). This article's main purpose
is to define several unavoidable challenges in real systems to investi-
gate the robustness of the method presented in this study against
these changes. These challenges include noise, scale change, trans-
mission, and rotation. Figure 1 illustrates an example of these
challenges.
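To make the challenge definitions concrete, the sketch below synthesizes the noise and displacement challenges on an image array. This is an illustrative assumption, not the paper's actual test harness; the function names are ours, and the rotation and scale challenges would additionally need an imaging library (e.g., scipy.ndimage or OpenCV) for interpolation.

```python
import numpy as np

rng = np.random.default_rng(1)

def salt_pepper(img, density):
    """Flip a `density` fraction of pixels to 0 or 255 (device noise)."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

def displace(img, dy, dx, fill=0):
    """Shift the image by (dy, dx); border pixels fall outside the frame
    and are replaced by `fill`, matching the border-pixel loss described
    for the displacement challenge."""
    out = np.full_like(img, fill)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out
```

Displacement implemented this way discards border pixels permanently rather than wrapping them around, which is the behavior the challenge is meant to model.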
As shown in Figure 1, there are two types of changes in the image. The first is a change in pixel brightness values: rotating, moving, and resizing the image alters a large portion of the pixel brightness values. The second is the loss of part of the image, where the displacement challenge pushes a portion of the border pixels out of the frame. Therefore, the purpose of this study is to present a method that can generate distinct
feature vectors despite overall changes in the image to classify the
images into correct classes. For this purpose, in this study, we use
median robust extended local binary pattern (MRELBP; Liu
et al., 2016) to extract features from images. Instead of extracting
attributes from pixel values, this method uses a neighborhood area
to generate the attribute of each pixel. This causes changes in pixel
values to produce less change in the output of image feature vec-
tors, and the feature vectors have the required stability against
changes.
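The stabilizing effect of block-level filtering can be seen in a tiny sketch (our own toy example, with `median_response` standing in for the MRELBP filter φ): a single salt-and-pepper pixel changes the raw value at its location by 155 gray levels, but leaves the median of its 3 × 3 block untouched.

```python
import numpy as np

def median_response(img, y, x, size=3):
    """Median of the size-by-size block centred at (y, x): a stand-in for
    the phi() median filter MRELBP applies instead of raw pixel values."""
    h = size // 2
    return float(np.median(img[y - h:y + h + 1, x - h:x + h + 1]))

img = np.full((9, 9), 100.0)     # flat toy image
noisy = img.copy()
noisy[4, 4] = 255.0              # one salt-and-pepper corrupted pixel

raw_change = abs(img[4, 4] - noisy[4, 4])   # 155.0: the raw pixel jumps
med_change = abs(median_response(img, 4, 4)
                 - median_response(noisy, 4, 4))  # 0.0: the median absorbs it
```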
This research is further composed of four main sections. In
Section 2, the MRELBP algorithm is presented, and Section 3 presents
proposed method in detail. Section 4 exhibits results and discussion,
and finally, Section 5 concludes the research.
2|THE MRELBP ALGORITHM
The RELBP algorithm comes in three variants: GRELBP, ARELBP, and MRELBP (Liu et al., 2016). The difference among these three algorithms is the type of filter used: if a Gaussian filter is used in the RELBP algorithm, it is called GRELBP; if an averaging filter is used, it is called ARELBP; and if a median filter is used, the resulting algorithm is called MRELBP. The RELBP algorithm behaves very much like ELBP and consists of the three descriptors RELBP_CI, RELBP_NI, and RELBP_RD. The only difference from ELBP is that, instead of using a single pixel in the computation, it uses a block, and to obtain a single value per block it applies one of the three filters listed above (Gaussian, mean, or median). This section provides a detailed description of the algorithm.
FIGURE 1 (a) Image without challenge, (b) image with rotation challenge, (c) transition challenge image, (d) image with scale change challenge,
and (e) noise challenge image
2.1 |RELBP_CI descriptor

This descriptor is calculated by Equations (1) and (2):

$$\mathrm{RELBP\_CI}(x_c) = s\bigl(\phi(X_{c,\omega}) - \mu_\omega\bigr), \quad (1)$$

$$\mu_\omega = \frac{1}{N}\sum_{c=0}^{N} \phi(X_{c,\omega}). \quad (2)$$

Figure 2 shows the block in which the central pixel is located, drawn in gray. In Equation (2), $\mu_\omega$ represents the average filtered brightness over the blocks, and $\phi(X_{c,\omega})$ is the result of applying one of the filters to the block $X_{c,\omega}$. The function $s$ compares its argument with zero: if the value inside the function is less than zero, the result is 0; otherwise it is 1. So $\mathrm{RELBP\_CI}(x_c)$ for each block can take only the two values 0 or 1.
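A minimal sketch of Equations (1) and (2), assuming the median filter for φ as in MRELBP (the function names and the toy gradient image are ours, not the paper's):

```python
import numpy as np

def phi(block):
    """Median filter response of a block -- the MRELBP choice of filter."""
    return float(np.median(block))

def relbp_ci(img, centers, omega=3):
    """RELBP_CI per Equations (1)-(2): each centre block's filtered value
    is thresholded against the mean filtered value mu over all blocks."""
    h = omega // 2
    responses = [phi(img[y - h:y + h + 1, x - h:x + h + 1])
                 for y, x in centers]
    mu = sum(responses) / len(responses)                  # Equation (2)
    return [1 if r - mu >= 0 else 0 for r in responses]   # s(.) in Equation (1)

# Toy gradient image: darker blocks code to 0, brighter blocks to 1.
img = np.arange(49, dtype=float).reshape(7, 7)
codes = relbp_ci(img, [(2, 2), (4, 4)])
```

On the toy gradient, the block centred at (2, 2) lies below the mean of the filtered responses and codes to 0, while the block at (4, 4) codes to 1.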
2.2 |RELBP_NI descriptor

This descriptor is calculated by Equations (3) and (4):

$$\mathrm{RELBP\_NI}_{r,p}(x_c) = \sum_{n=0}^{p-1} s\bigl(\phi(X_{r,p,\omega_r,n}) - \mu_{r,p,\omega_r}\bigr)\, 2^n, \quad (3)$$

$$\mu_{r,p,\omega_r} = \frac{1}{p}\sum_{n=0}^{p-1} \phi(X_{r,p,\omega_r,n}). \quad (4)$$

In Equations (3) and (4), $r$ represents the neighborhood radius, $p$ the number of neighborhood blocks at that radius, and $\omega_r$ the dimensions of the neighborhood blocks. The neighboring blocks are placed as in the ELBP method, centered on a circle of radius $r$ around $X_{c,\omega}$ at the angles $2\pi n/p$. Figure 2 shows the locations of the adjacent blocks. In summary, as illustrated in Figure 2, $\mathrm{RELBP\_NI}$ for a radius $r_2$ amounts to applying the LBP algorithm to the median (or mean, or Gaussian) filter responses of the blue blocks, with the mean of those filtered responses taken as the threshold.
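The neighborhood sampling of Equations (3) and (4) can be sketched as follows, again assuming a median filter for φ (block placement by rounding to the nearest pixel is our simplification; implementations often interpolate):

```python
import math
import numpy as np

def phi(block):
    """Median filter response of a block (the MRELBP filter choice)."""
    return float(np.median(block))

def relbp_ni(img, yc, xc, r=2, p=8, omega=3):
    """RELBP_NI per Equations (3)-(4): p neighbour blocks on a circle of
    radius r around (yc, xc), each thresholded against their own mean."""
    h = omega // 2
    responses = []
    for n in range(p):
        ang = 2 * math.pi * n / p
        y = int(round(yc + r * math.sin(ang)))
        x = int(round(xc + r * math.cos(ang)))
        responses.append(phi(img[y - h:y + h + 1, x - h:x + h + 1]))
    mu = sum(responses) / p                                # Equation (4)
    return sum((1 if v - mu >= 0 else 0) << n              # Equation (3)
               for n, v in enumerate(responses))

# Toy image whose value equals the column index: the left neighbour falls
# below the mean of the responses, the others do not.
img = np.tile(np.arange(9, dtype=float), (9, 1))
code = relbp_ni(img, 4, 4, r=2, p=4)
```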
2.3 |RELBP_RD descriptor

This descriptor is calculated using Equation (5):

$$\mathrm{RELBP\_RD}_{r,r-1,p,\omega_r,\omega_{r-1}}(x_c) = \sum_{n=0}^{p-1} s\bigl(\phi(X_{r,p,\omega_r,n}) - \phi(X_{r-1,p,\omega_{r-1},n})\bigr)\, 2^n. \quad (5)$$

FIGURE 2 How the RELBP algorithm works

As shown in Figure 3, the radial difference between the radii $r$ and $r-1$ over the $p$ blocks amounts to applying the LBP algorithm to the filtered blocks at radius $r$, with the filtered blocks at radius $r-1$ serving as thresholds. For example, the median of the red block is subtracted from the median of the blue block, and the LBP thresholding is applied to the difference. All three categories of features are derived from the described descriptors as LBP codes, so these features can be integrated to produce more distinctive feature vectors (Liu, Zhao, Long, Kuang, & Fieguth, 2012). In the last step, the final image histogram is produced by integrating the three descriptors RELBP_CI, RELBP_NI, and RELBP_RD. Figure 3 illustrates how the properties of these three descriptors are integrated.
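Equation (5) and the final histogram integration can be sketched as follows; the median filter for φ and plain histogram concatenation are our simplifying assumptions (the original formulation builds a joint histogram):

```python
import math
import numpy as np

def phi(block):
    """Median filter response of a block (the MRELBP filter choice)."""
    return float(np.median(block))

def block_at(img, yc, xc, r, n, p, omega):
    """The omega-sized block in direction n (of p) at radius r from (yc, xc)."""
    h = omega // 2
    y = int(round(yc + r * math.sin(2 * math.pi * n / p)))
    x = int(round(xc + r * math.cos(2 * math.pi * n / p)))
    return img[y - h:y + h + 1, x - h:x + h + 1]

def relbp_rd(img, yc, xc, r=3, p=4, omega=3):
    """RELBP_RD per Equation (5): the radius-r block response minus the
    radius-(r-1) response in the same direction, thresholded at zero."""
    code = 0
    for n in range(p):
        diff = (phi(block_at(img, yc, xc, r, n, p, omega))
                - phi(block_at(img, yc, xc, r - 1, n, p, omega)))
        if diff >= 0:
            code |= 1 << n
    return code

def joint_feature(ci_codes, ni_codes, rd_codes, p=8):
    """Concatenated histograms of the three code maps -- a simplified
    stand-in for the integration step described above."""
    return np.concatenate([np.bincount(ci_codes, minlength=2),
                           np.bincount(ni_codes, minlength=2 ** p),
                           np.bincount(rd_codes, minlength=2 ** p)])

img = np.tile(np.arange(9, dtype=float), (9, 1))   # value = column index
code = relbp_rd(img, 4, 4, r=3, p=4)
```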
2.4 |Features extraction process

The feature extraction methods described in Sections 2.1–2.3 are calculated on one radial-difference layer and one neighborhood layer.

FIGURE 3 Results integration of three descriptors. CP, Central pixel; LBP, Local binary pattern; NP, Neighboring pixels; RD, Radial difference

However, this study used four layers of neighborhood and radial difference to extract features from the image(s), applying a median filter to the blocks. Finally, the histograms generated from each layer, each obtained by combining the RELBP_CI, RELBP_NI, and RELBP_RD histograms, are merged to produce the final image feature vector. Figure 4 shows an overview of the method of Liu et al. (2012).
3|PROPOSED METHOD
The identity verification system introduced in this research is based on a local feature extractor and a supervised classification.
Like many similar systems, the system consists of four stages:
palm print segmentation, feature extraction, feature selection, and
classification. Figure 5 shows the proposed method in the training
phase.
3.1 |Palm print area segmentation
In this study, images of two data sets (Kumar, 2008; Kumar &
Shekhar, 2010) IITD and CASIA (CASIA data set) were used. The
IITD data set images are provided as ROIs, and the images in this data
set do not need to be segmented. Still, the CASIA data set images are
raw and without any preprocessing, so processing the images in this
data set data requires a segmentation and ROI extraction step. There-
fore, Zhang et al.'s (2018) method is used to localize the images in this
data set. Figure 6 shows a sample of CASIA data set images before
and after segmentation.
3.2 |Features extraction
Unique features extraction is a critical stage in the whole process of
identity verification (Jadooki, Mohamad, Saba, Almazyad, &
FIGURE 4 Integrating the RELBP algorithm into four layers. CI, Central intensity; NI, Neighboring intensities; RD, Radial difference; RELBP, Robust extended local binary pattern
Rehman, 2017; Yousuf, Mehmood, Habib, et al., 2018). In the identifi-
cation process, feature extraction is one of the most sensitive and
challenging stages of research (Mittal et al., 2020; Saba, 2019; Saba,
Almazyad, & Rehman, ).
As mentioned in Subsection 2.4, median robust extended local binary patterns (MRELBP) were employed to extract the features. First, each image is divided into 16 completely equal, nonoverlapping blocks.
Then, the properties of RELBP_CI, RELBP_NI, and RELBP_RD are
extracted following Figure 4 for the pixels of each block in four
layers, and according to Figure 5, the features extracted by these
three descriptors are merged into the four layers described in
Section 2. This step is repeated for all 16 blocks, and finally, the properties of the 16 blocks are combined and the image property vector is generated.
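The block-wise scheme just described can be sketched like this; `descriptor` is a placeholder (a plain intensity histogram) where the actual system would plug in the merged MRELBP histograms of each block:

```python
import numpy as np

def image_feature(img, n_blocks=4, descriptor=None):
    """Split the ROI into n_blocks x n_blocks equal, nonoverlapping blocks
    (16 blocks for n_blocks=4, as described above), describe each block,
    and concatenate. `descriptor` here is a plain 8-bin intensity
    histogram, standing in for the merged MRELBP histograms."""
    if descriptor is None:
        descriptor = lambda b: np.histogram(b, bins=8, range=(0, 256))[0]
    bh, bw = img.shape[0] // n_blocks, img.shape[1] // n_blocks
    parts = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            parts.append(descriptor(img[i * bh:(i + 1) * bh,
                                        j * bw:(j + 1) * bw]))
    return np.concatenate(parts)

feature = image_feature(np.zeros((16, 16)))   # 16 blocks x 8 bins each
```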
3.3 |Features selection
Features selection is the process of selecting the most discriminative features, from those extracted, that can identify the object uniquely (Sharif et al., 2019; Amin, Sharif, Raza, Saba, & Anjum, 2019). Since the MRELBP
descriptor extracts image properties in multiple layers and internal
descriptors, generating a large volume of features for each image is
very natural (S. A. Khan, Nazir, et al., 2019). But it should be borne in
mind that many of the features produced do not contain useful and
distinctive information about the image (Harouni et al., 2014;
Neamah, Mohamad, Saba, & Rehman, 2014). Therefore, after
extracting the feature from the images, an essential feature selection
FIGURE 6 CASIA data set microscopic images: (a) before segmentation and (b) after segmenting and extracting the ROI area. ROI, Region of interest
FIGURE 5 Proposed method in the training phase. CP, Central pixel; LBP, Local binary pattern; NP, Neighboring pixels; RD, Radial difference
algorithm is performed on the generated feature vectors using the
PCA algorithm to reduce the length of the feature vectors and thus the computational and time complexity of the classification stage. Another
important point to note about PCA is that it eliminates the correlation
between attributes and enhances the differentiation of attribute vec-
tors of different classes, which ultimately improves the identification
rate (Rehman et al., 2020; Rehman, Alqahtani, Altameem, &
Saba, 2014).
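A minimal sketch of the PCA reduction step via the singular value decomposition (our own illustration, not the paper's implementation); the projected components come out mutually uncorrelated, which is the decorrelation property noted above:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, shortening each vector while decorrelating its entries."""
    Xc = X - X.mean(axis=0)                      # centre the features
    # Right singular vectors of the centred data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))                # 20 feature vectors
Z = pca_reduce(X, 2)                             # reduced to length 2
```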
3.4 |Classification
Following the production and improvement of feature images vectors
of training, it is time to identify the test feature vectors (Ramzan
et al., 2020). Figure 7 shows the block diagram of the classification of
test images.
The main purpose of this research is to identify individuals with
altered or damaged palm print images. In a clear sense, the images in
the training phase are without any challenges. Still, the identity of
these images or individuals must be tested with images that do not
meet the ideal conditions for training images. Thus, as shown in
Figure 7, the performance and efficiency of the proposed method are
tested with images subjected to challenges such as noise, scale
change, rotation, and transfer. In the remainder of this study, and in
Section 4, results of the proposed method's robustness against these
challenges are presented.
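The final matching step can be sketched as a plain k-nearest-neighbor vote over the reduced feature vectors (an illustrative sketch: the value of k, the Euclidean metric, and the labels here are our assumptions):

```python
from collections import Counter
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """k-nearest-neighbour vote with Euclidean distance over the
    (PCA-reduced) feature vectors."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Two toy identities, two training vectors each.
train_X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_y = ["person_a", "person_a", "person_b", "person_b"]
label = knn_predict(train_X, train_y, np.array([0.0, 0.4]))
```

A challenged test image (noisy, shifted, or rescaled) is matched the same way: as long as its feature vector stays closer to its own class than to any other, the vote still returns the correct identity.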
4|EXPERIMENTAL RESULTS
In this section, the efficiency of the proposed method is evaluated using CASIA and IITD data set images. Since the proposed method contains many parameters in the feature extraction and classification sections, the method parameters are first described in Table 1; this table provides a complete description of the three methods compared. These parameters are held constant at all stages of the test.
In the following section, we first describe the images data set
used in experiments, results, and comparisons of proposed approach
in state of art.
4.1 | Data sets
Benchmark data sets play an important role in evaluating reported techniques and comparing their results with the state of the art (Rehman et al., 2018; Rehman & Saba, 2011). Therefore, microscopic palm print images of both the IITD and CASIA data sets were used to evaluate the
FIGURE 7 Block diagram of the classification of the test images. KNN, k-nearest neighbor; MRELBP, median robust extended local binary pattern; PCA, principal component analysis
proposed method. Accordingly, the images of these two data sets are divided into three sections. The first section contains images used to identify the best parameters of the proposed descriptor and classifier. The second section consists of training images, which comprise 40% of the data set images, and the third contains the images with which the proposed method is tested. Table 2 shows the specifications of both data sets.
4.2 | Test implementation hardware and software platform
The proposed method is simulated in MATLAB R2017a on the Windows 10 operating system. The hardware platform used is an Intel® Core i5-8500 system with 8 GB of RAM, plus 16 GB of virtual memory allocated on an SSD drive.
4.3 | Analysis and discussion
A few essential points about the implementation of the proposed and evaluated methods should be noted. First, the sampling of training and test images was random in all experiments. Five of the six images per hand were selected for training; the remaining image was randomly selected for testing, and rotation, translation, scaling, and noise were applied to it. In the method proposed in this paper, the set of images used to identify the best parameters was removed from the set of images used in the main experiments. The data used in these experiments for both data sets comprised 600 images of 100 individuals (five images per person for training and one for testing). The introduced and compared methods were then repeated 10 times with random sampling of the training and test sets, and the identification accuracy was recorded for each run. The identification accuracy reported in the results tables is the average over these 10 runs, together with the 95% confidence interval computed over all runs (Saba et al., 2018). Tables 3–6 show the results of this study.
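The run statistics described above (mean over 10 runs with a 95% confidence interval) can be reproduced with a short helper. The normal-approximation z-interval and the toy accuracy values below are assumptions for illustration; the paper does not state which interval formula was used:

```python
import math

def mean_ci95(accuracies):
    """Mean accuracy over repeated runs with a 95% confidence interval
    (normal approximation, z = 1.96 -- an assumption, not the paper's
    stated formula)."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    # Sample variance with Bessel's correction
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)
    half = 1.96 * math.sqrt(var / n)     # half-width of the interval
    return mean, (mean - half, mean + half)

# Hypothetical accuracies from 10 random train/test resamplings
runs = [97.0, 97.5, 96.8, 97.3, 97.1, 97.4, 96.9, 97.2, 97.6, 97.2]
mean, (lo, hi) = mean_ci95(runs)
```

The reported tables then list `mean` on one row and the interval `(lo, hi)` on the next.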
Table 3 presents the image identification rates under the influence of salt-and-pepper noise in the range from 0 to 0.32. As can be deduced from the results of this challenge, the detection rate is not significantly affected by salt-and-pepper noise; only when the noise density exceeds 0.16 does the decrease in the detection rate become noticeable. Figure 2 helps to explain this robustness. It can be seen in that figure that, first, features are extracted for each pixel at several neighboring radii; second, at each radius, eight blocks are used to extract the features; and third, the median of each block is used as the threshold value in the computation. The use of a median filter inherently makes a method resistant to noise (Saba et al., 2014a). In addition, because features are extracted from a relatively large area, partial damage to that area has little effect on the quality of the features extracted from the whole area. Figure 8 compares the identification rates of the two data sets used in this study under the noise challenge.
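The noise model and the median thresholding that explain this robustness can be illustrated as follows. The helper names and toy image are ours, and `median3` only mimics the median step on a single 3 × 3 block; it is not the full MRELBP descriptor:

```python
import random

def salt_pepper(img, density, rng=random.Random(0)):
    """Corrupt a grayscale image (list of lists, values 0-255) with
    salt-and-pepper noise at the given density (0 to 0.32 in the tests)."""
    out = [row[:] for row in img]
    h, w = len(img), len(img[0])
    for _ in range(int(density * h * w)):
        y, x = rng.randrange(h), rng.randrange(w)
        out[y][x] = rng.choice((0, 255))   # salt or pepper
    return out

def median3(img, y, x):
    """Median of a 3x3 block: the filtering step that makes median-based
    thresholds robust to isolated noisy pixels."""
    vals = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return sorted(vals)[4]

clean = [[100] * 9 for _ in range(9)]
noisy = salt_pepper(clean, 0.16)

# Deterministic demo: one salt and one pepper hit in a uniform block
block = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
block[0][0], block[2][2] = 255, 0
print(median3(block, 1, 1))  # 100: the median ignores both outliers
```

A mean-based threshold would have been pulled by the two corrupted pixels, while the median is unchanged, which is exactly the robustness discussed above.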
Table 4 presents the identification results under the rotation challenge between 0° and 6°. As can be seen, the proposed method remains relatively stable under rotation. The reason for this stability is that the features of each pixel are extracted over four circles, so a small rotation of the image only slightly changes the selected block areas, which cannot significantly impact the detection rate. Figure 9 compares the identification rates of the two data sets used in this study under the rotation challenge.
In Table 5, the results of the proposed method are presented for the challenge of translating image pixels in the range from 0 to 12 pixels along the x-axis. The values obtained by calculating the
TABLE 1 Parameters' description in the methods

Method: Proposed method
Specification:
- Features of each pixel are extracted from four neighborhoods.
- The filter used in the RELBP algorithm is the median filter, so the method used is MRELBP.
- The neighborhood radii in the proposed method are 2, 4, 6, and 8.
- There are eight blocks in each neighborhood area, so the centers of the blocks are 45° apart relative to the central block.
- Neighborhood blocks are 2 pixels in diameter, and the remaining neighbors are 5 pixels.
- The neighborhood parameter of the KNN classifier is K = 1.
- In the noise challenge of this study, salt-and-pepper noise was applied to the test images in the range from 0 to 0.32.
- In the rotation challenge of this study, the test images were evaluated in the range from 0° to 6°.
- In the translation challenge of this study, the test images are shifted in the positive direction of the x-axis between 0 and 12 pixels.
- In the scale challenge of this study, the test images undergo a scale change from 0 to 12.

Abbreviations: KNN, k-nearest neighbor; MRELBP, median robust extended local binary pattern.
TABLE 2 Specifications of the databases

Database                                   IITD database    CASIA database
No. of subjects                            235              312
Number of still images per subject         14               More than 20 samples
(two hands)
Distance                                   Variable         Stable
Resolution                                 800 × 600        640 × 480
ROI resolution                             150 × 150        200 × 200
Format                                     .bmp             .JPG

Abbreviation: ROI, region of interest.
TABLE 3 Identification results of the salt-and-pepper noise challenge

Noise level                0            0.02         0.04         0.08         0.16         0.32
IITD average               97.2         97.45        97.55        97.20        96.85        87.65
IITD confidence interval   96.56–97.83  96.87–98.11  96.75–98.34  96.74–97.65  96.02–97.67  85.86–89.43
CASIA average              96.60        94.15        92.22        90.10        86.85        81.20
CASIA confidence interval  95.87–97.33  93.30–95.00  91.24–93.20  89.18–91.02  85.75–87.95  79.63–82.77
TABLE 4 Identification results of the rotation challenge

Rotation rate (degrees)    0            2            3            4            5            6
IITD average               97.2         96.85        95.50        94.10        91.65        87.55
IITD confidence interval   96.56–97.83  96.19–97.50  94.51–96.48  93.26–94.93  90.25–93.04  85.73–89.36
CASIA average              96.60        95.96        93.70        92.20        89.11        86.15
CASIA confidence interval  95.87–97.33  95.15–96.77  92.75–94.65  91.16–93.24  88.03–90.19  84.80–87.50
TABLE 6 Identification results of the scale challenge

Scale rate                 0            2            3            6            9            12
IITD average               97.2         97.45        96.95        94.25        86.10        54.25
IITD confidence interval   96.56–97.83  96.87–98.02  96.13–97.76  93.51–94.98  84.34–87.85  43.08–65.41
CASIA average              96.60        96.22        96.39        95.02        90.77        68.56
CASIA confidence interval  95.87–97.33  95.41–97.03  95.46–97.32  94.12–95.92  89.60–91.94  66.96–70.16
TABLE 5 Identification results of the translation challenge

Transfer rate (pixels)     0            1            3            6            9            12
IITD average               97.2         96.90        96.55        95.70        90.70        82
IITD confidence interval   96.56–97.83  96.21–97.60  95.80–97.29  94.65–96.74  89.68–91.71  80.78–83.21
CASIA average              96.60        96.45        95.22        94.84        91.36        85.20
CASIA confidence interval  95.87–97.33  95.59–97.31  94.32–96.12  93.85–95.83  90.30–92.42  83.99–86.41
FIGURE 8 Comparison of IITD and CASIA data set identification rates under the noise challenge
FIGURE 9 Comparison of IITD and CASIA data set identification rates under the rotation challenge
FIGURE 10 Comparison of IITD and CASIA data set identification rates under the translation challenge
FIGURE 11 Comparison of IITD and CASIA data set identification rates under scale challenge
detection rate of the proposed method in this challenge also indicate the stability of the proposed method against image translation. This stability arises because only part of the image margin is cut off from the original image, and the proposed method extracts the necessary and distinctive features for each class from the remainder. Figure 10 compares the IITD and CASIA data set identification rates under the translation challenge.
At the end of this section, the scale challenge results are presented in Table 6. This challenge shrinks the image and thereby removes many of the textures in it. However, the proposed method still produces precise and distinctive feature vectors from the remaining texture for each class, thus preventing a noticeable reduction in the identification rate. Figure 11 compares the IITD and CASIA data set identification rates under the scale challenge.
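The geometric perturbations applied to the test images (translation, rotation, scale change) can be sketched as below. These helpers are illustrative nearest-neighbour versions written for this summary, not the exact routines used in the experiments:

```python
import math

def translate(img, dx, fill=0):
    """Shift each row dx pixels along the x-axis (0-12 px in Table 5)."""
    w = len(img[0])
    return [([fill] * dx + row)[:w] for row in img]

def rotate_point(x, y, cx, cy, deg):
    """Rotate (x, y) about (cx, cy); applied per pixel for small-angle
    rotations (0-6 degrees in Table 4) with nearest-neighbour rounding."""
    t = math.radians(deg)
    rx = cx + (x - cx) * math.cos(t) - (y - cy) * math.sin(t)
    ry = cy + (x - cx) * math.sin(t) + (y - cy) * math.cos(t)
    return round(rx), round(ry)

def downscale(img, percent):
    """Nearest-neighbour downscaling to `percent` of the original size,
    mimicking the scale-change challenge."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, h * percent // 100), max(1, w * percent // 100)
    return [[img[y * h // nh][x * w // nw] for x in range(nw)] for y in range(nh)]

img = [[x + 10 * y for x in range(10)] for y in range(10)]
shifted = translate(img, 3)   # 3-pixel shift along the x-axis
small = downscale(img, 50)    # 50% of the original size
```

Each perturbed image is then fed to the trained system exactly as an ordinary test image would be.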
5 | CONCLUSION
In this paper, a new method for identity verification based on palm print microscopic images is presented. In the proposed method, four critical identity verification challenges were investigated: image rotation, image translation, image scaling, and noise. MRELBP was used as a local descriptor. Extracting features from a large area around each pixel makes the method very stable against many issues, including the earlier challenges. Following feature extraction, PCA is used to reduce the dimension and the correlation of the feature vectors, all of which ultimately contribute to the proposed system's stability against changes. Finally, a k-nearest neighbor classifier attained state-of-the-art identity verification accuracy of 97.2% and 96.6% without challenges on the two benchmark data sets IITD and CASIA, respectively.
ACKNOWLEDGMENT
This research is technically supported by Artificial Intelligence and
Data Analytics Research Lab (AIDA) CCIS Prince Sultan University,
Riyadh, Saudi Arabia. The authors are thankful for this support.
CONFLICT OF INTEREST
The authors declare no conflict of interest and all authors contributed
equally.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available: the CASIA data set at http://biometrics.idealtest.org/ (reference number PALMPRINT), and the IITD palm print database at https://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm (reference number 1.0).
ORCID
Amjad Rehman https://orcid.org/0000-0002-3817-2655
Majid Harouni https://orcid.org/0000-0003-0798-6100
Tanzila Saba https://orcid.org/0000-0003-3138-3801
REFERENCES
Ahmad, A. M., Sulong, G., Rehman, A., Alkawaz, M. H., & Saba, T. (2014). Data hiding based on improved exploiting modification direction method and Huffman coding. Journal of Intelligent Systems, 23(4), 451–459. https://doi.org/10.1515/jisys-2014-0007
Almaghtuf, J., & Khelifi, F. (2018). Self-geometric relationship filter for efficient SIFT key-points matching in full and partial palm print recognition. IET Biometrics, 7(4), 296–304.
Amin, J., Sharif, M., Raza, M., Saba, T., & Anjum, M. A. (2019). Brain tumor detection using statistical and machine learning method. Computer Methods and Programs in Biomedicine, 177, 69–79.
CASIA Dataset. http://biometrics.idealtest.org/
El-Tarhouni, W., Boubchir, L., Elbendak, M., & Bouridane, A. (2019). Multispectral palm print recognition using Pascal coefficients-based LBP and PHOG descriptors with random sampling. Neural Computing and Applications, 31(2), 593–603.
Fahad, H. M., Khan, M. U. G., Saba, T., Rehman, A., & Iqbal, S. (2018). Microscopic abnormality classification of cardiac murmurs using ANFIS and HMM. Microscopy Research and Technique, 81(5), 449–457. https://doi.org/10.1002/jemt.22998
Fei, L., Xu, Y., Teng, S., Zhang, W., Tang, W., & Fang, X. (2017). Local orientation binary pattern with use for palm print recognition. Paper presented at the Chinese Conference on Biometric Recognition, 213–220.
Harouni, M., Rahim, M. S. M., Al-Rodhaan, M., Saba, T., Rehman, A., & Al-Dhelaan, A. (2014). Online Persian/Arabic script classification without contextual information. The Imaging Science Journal, 62(8), 437–448. https://doi.org/10.1179/1743131X14Y.0000000083
Iftikhar, S., Fatima, K., Rehman, A., Almazyad, A. S., & Saba, T. (2017). An evolution based hybrid approach for heart diseases classification and associated risk factors identification. Biomedical Research, 28(8), 3451–3455.
Iqbal, S., Khan, M. U. G., Saba, T., Mehmood, Z., Javaid, N., Rehman, A., & Abbasi, R. (2019). Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microscopy Research and Technique, 82(8), 1302–1315. https://doi.org/10.1002/jemt.23281
Jabeen, S., Mehmood, Z., Mahmood, T., Saba, T., Rehman, A., & Mahmood, M. T. (2018). An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model. PLoS One, 13(4), e0194526.
Jadooki, S., Mohamad, D., Saba, T., Almazyad, A. S., & Rehman, A. (2017). Fused features mining for depth-based hand gesture recognition to classify blind human communication. Neural Computing and Applications, 28(11), 3285–3294. https://doi.org/10.1007/s00521-016-2244-5
Jamal, A., Hazim Alkawaz, M., Rehman, A., & Saba, T. (2017). Retinal imaging analysis based on vessel detection. Microscopy Research and Technique, 80(7), 799–811. https://doi.org/10.1002/jemt.22867
Joudaki, S., Mohamad, D. B., Saba, T., Rehman, A., Al-Rodhaan, M., & Al-Dhelaan, A. (2014). Vision-based sign language classification: A directional review. IETE Technical Review, 31(5), 383–391.
Khan, M. A., Javed, M. Y., Sharif, M., Saba, T., & Rehman, A. (2019). Multi-model deep neural network based features extraction and optimal selection approach for skin lesion classification. 2019 International Conference on Computer and Information Sciences (ICCIS) (pp. 1–7). IEEE.
Khan, M. A., Sharif, M., Akram, T., Raza, M., Saba, T., & Rehman, A. (2020). Hand-crafted and deep convolutional neural network features fusion and selection strategy: An application to intelligent human action recognition. Applied Soft Computing, 87, 105986. https://doi.org/10.1016/j.asoc.2019.105986
Khan, A. R., Doosti, F., Karimi, M., Harouni, M., Tariq, U., Fati, S. M., & Ali Bahaj, S. (2021). Authentication through gender classification from iris images using support vector machine. Microscopy Research and Technique, 84(11), 2666–2676.
Khan, S. A., Nazir, M., Khan, M. A., Saba, T., Javed, K., Rehman, A., … Awais, M. (2019). Lungs nodule detection framework from computed tomography images using support vector machine. Microscopy Research and Technique, 82(8), 1256–1266.
Kumar, A. (2008). Incorporating cohort information for reliable palm print authentication. Paper presented at the 2008 Sixth Indian Conference on Computer Vision, Graphics and Image Processing, pp. 583–590.
Kumar, A., & Shekhar, S. (2010). Personal identification using multibiometrics rank-level fusion. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 41(5), 743–752.
Liu, L., Lao, S., Fieguth, P. W., Guo, Y., Wang, X., & Pietikäinen, M. (2016). Median robust extended local binary pattern for texture classification. IEEE Transactions on Image Processing, 25(3), 1368–1381.
Liu, L., Zhao, L., Long, Y., Kuang, G., & Fieguth, P. (2012). Extended local binary patterns for texture classification. Image and Vision Computing, 30(2), 86–99.
Lung, J. W. J., Salam, M. S. H., Rehman, A., Rahim, M. S. M., & Saba, T. (2014). Fuzzy phoneme classification using multi-speaker vocal tract length normalization. IETE Technical Review, 31(2), 128–136. https://doi.org/10.1080/02564602.2014.892669
Meethongjan, K., Dzulkifli, M., Rehman, A., Altameem, A., & Saba, T. (2013). An intelligent fused approach for face recognition. Journal of Intelligent Systems, 22(2), 197–212.
Minaee, S., & Wang, Y. (2016). Palm print recognition using deep scattering convolutional network. arXiv preprint, arXiv:1603.09027.
Mittal, A., Kumar, D., Mittal, M., Saba, T., Abunadi, I., Rehman, A., & Roy, S. (2020). Detecting pneumonia using convolutions and dynamic capsule routing for chest X-ray images. Sensors, 20(4), 1068.
Mokni, R., Drira, H., & Kherallah, M. (2017). Combining shape analysis and texture pattern for palmprint identification. Multimedia Tools and Applications, 76(22), 23981–24008.
Neamah, K., Mohamad, D., Saba, T., & Rehman, A. (2014). Discriminative features mining for offline handwritten signature verification. 3D Research, 5(2), 1–6. https://doi.org/10.1007/s13319-013-0002-3
Nodehi, A., Sulong, G., Al-Rodhaan, M., Al-Dhelaan, A., Rehman, A., & Saba, T. (2014). Intelligent fuzzy approach for fast fractal image compression. Biomedical Research, 2014(1), 1–9.
Ramzan, F., Khan, M. U. G., Rehmat, A., Iqbal, S., Saba, T., Rehman, A., & Mehmood, Z. (2020). A deep learning approach for automated diagnosis and multi-class classification of Alzheimer's disease stages using resting-state fMRI and residual neural networks. Journal of Medical Systems, 44(2), 1–16.
Rashid, M., Khan, M. A., Alhaisoni, M., Wang, S. H., Naqvi, S. R., Rehman, A., & Saba, T. (2020). A sustainable deep learning framework for object recognition using multi-layers deep features fusion and selection. Sustainability, 12(12), 5037.
Raza, M., Sharif, M., Yasmin, M., Khan, M. A., Saba, T., & Fernandes, S. L. (2018). Appearance based pedestrians' gender recognition by employing stacked auto encoders in deep learning. Future Generation Computer Systems, 88, 28–39.
Rehman, A., Alqahtani, S., Altameem, A., & Saba, T. (2014). Virtual machine security challenges: Case studies. International Journal of Machine Learning and Cybernetics, 5(5), 729–742.
Rehman, A., Khan, M. A., Mehmood, Z., Saba, T., Sardaraz, M., & Rashid, M. (2020). Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microscopy Research and Technique, 83(4), 410–423. https://doi.org/10.1002/jemt.23429
Rehman, A., Khan, M. A., Saba, T., Mehmood, Z., Tariq, U., & Ayesha, N. (2021). Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microscopy Research and Technique, 84(1), 133–149. https://doi.org/10.1002/jemt.23597
Rehman, A., & Saba, T. (2011). Performance analysis of character segmentation approach for cursive script recognition on benchmark database. Digital Signal Processing, 21(3), 486–490. https://doi.org/10.1016/j.dsp.2011.01.016
Rida, I., Herault, R., Marcialis, G. L., & Gasso, G. (2019). Palm print recognition with an efficient data driven ensemble classifier. Pattern Recognition Letters, 126, 21–30.
Saba, T. (2020). Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. Journal of Infection and Public Health, 13(9), 1274–1289.
Saba, T., Haseeb, K., Ahmed, I., & Rehman, A. (2020). Secure and energy-efficient framework using Internet of Medical Things for e-healthcare. Journal of Infection and Public Health, 13(10), 1567–1575.
Saba, T., Rehman, A., Al-Dhelaan, A., & Al-Rodhaan, M. (2014a). Evaluation of current documents image denoising techniques: A comparative study. Applied Artificial Intelligence, 28(9), 879–887.
Saba, T., Rehman, A., Altameem, A., & Uddin, M. (2014b). Annotated comparisons of proposed preprocessing techniques for script recognition. Neural Computing and Applications, 25(6), 1337–1347. https://doi.org/10.1007/s00521-014-1618-9
Saba, T., Rehman, A., Mehmood, Z., Kolivand, H., & Sharif, M. (2018). Image enhancement and segmentation techniques for detection of knee joint diseases: A survey. Current Medical Imaging, 14(5), 704–715.
Sharif, U., Mehmood, Z., Mahmood, T., Javid, M. A., Rehman, A., & Saba, T. (2019). Scene analysis and search using local features and support vector machine for effective content-based image retrieval. Artificial Intelligence Review, 52(2), 901–925.
Tamrakar, D., & Khanna, P. (2016). Noise and rotation invariant RDF descriptor for palmprint identification. Multimedia Tools and Applications, 75(10), 5777–5794.
Xu, Y., Fei, L., & Zhang, D. (2014). Combining left and right palm print images for more accurate personal identification. IEEE Transactions on Image Processing, 24(2), 549–559.
Yousuf, M., Mehmood, Z., Habib, H. A., Mehmood, T., Saba, T., Rehman, A., & Rashid, M. (2018). A novel technique based on visual words fusion analysis of sparse features for effective content-based image retrieval. Mathematical Problems in Engineering, 2018, 2134395. https://doi.org/10.1155/2018/2134395
Zhang, S., Wang, H., Huang, W., & Zhang, C. (2018). Combining modified LBP and weighted SRC for palm print recognition. Signal, Image and Video Processing, 12(6), 1035–1042.
How to cite this article: Rehman, A., Harouni, M., Karchegani, N. H. S., Saba, T., Bahaj, S. A., & Roy, S. (2021). Identity verification using palm print microscopic images based on median robust extended local binary pattern features and k-nearest neighbor classifier. Microscopy Research and Technique, 1–14. https://doi.org/10.1002/jemt.23989
... The LoG transformation of a two-dimensional Gaussian function is shown in formula (1). 22 ...
... 222 (,)()((,))/ s IxyvIsy µσµσ =+−− (4) In formula (4), µ represents the local mean value; 2 σ represents the variance of the pixel neighborhood; 2 v represents the estimated mean variance of all pixels in the neighborhood. In histogram serialization processing, the Local Binary Pattern (LBP) operator is one of the main functions [22]. LBP operator is used to characterize the spatial structure of local texture of an image, measure and extract local texture information of the image. ...
Article
Full-text available
Solar panels, as devices for converting solar energy into electricity, have received widespread attention in their research and manufacturing. In the production process of solar panels, it is inevitable to have some defects, such as cracks on the surface of solar panels due to extrusion or damage due to quality issues. These defects will greatly affect the lifespan and photoelectric efficiency of the product. This article improves the Serre standard model, which can simulate the ventral visual pathway with object recognition ability, based on the latest research progress and results of simulating biological visual mechanism models in computer vision, to improve the recognition effect of surface defects on solar panels. At the same time, a pre-processing scheme combining Gaussian Laplace operator operator and adaptive Wiener filter to remove noise spots is studied, and the local Gabor Binary Pattern Histogram Sequence (LGBPHS) features are obtained through pre-processing. The Percolation-Based image processing method for detecting obvious cracks was used to determine the location of the algorithm and the calculation results based on the improved standard model method. It mainly refers to the MAX value output by the C2 layer and the classification and identification results of whether there are cracks, and the crack location function is completed. The experimental results show that the proposed method has an accuracy rate of 98.86% in training and 98.64% in testing, and both the false detection rate and the missed detection rate do not exceed 1%. Therefore, the method proposed in the study has a high accuracy and can effectively identify the surface defects of solar panels.
... Furthermore, that makes it stable for changes. However, since the wavelet transform computes convolutions with wavelet filters, the wavelet transform is unstable to changes [24,25]. To this end, a set of wavelet filters is needed to produce a descriptor with stable features against deformation, transmission, scaling, direction, and dilation [26][27][28]. ...
Article
Full-text available
One of the most evident and meaningful feedback about people’s emotions is through facial expressions. Facial expression recognition is helpful in social networks, marketing, and intelligent education systems. The use of Deep Learning based methods in facial expression identification is widespread, but challenges such as computational complexity and low recognition rate plague these methods. Scatter Wavelet is a type of Deep Learning that extracts features from Gabor filters in a structure similar to convolutional neural networks. This paper presents a new facial expression recognition method based on wavelet scattering that identifies six states: anger, disgust, fear, happiness, sadness, and surprise. The proposed method is simulated using the JAFFE and CK+ databases. The recognition rate of the proposed method is 99.7%, which indicates the superiority of the proposed method in recognizing facial expressions.
... The development of Internet of Things (IoT) technology has enabled the connection of billions of devices and sensors, providing unprecedented convenience in people's lives and work [1]. However, as communication and data exchange between these devices increase, security issues in IoT systems have become increasingly prominent. ...
Article
Full-text available
In recent years, various smart devices based on IoT technology, such as smart homes, healthcare, detection, and logistics systems, have emerged. However, as the number of IoT-connected devices increases, securing the IoT is becoming increasingly challenging. To tackle the increasing security challenges caused by the proliferation of IoT devices, this research proposes an innovative method for IoT identity authentication. The method is based on an improved ring-learning with errors (R-LWE) algorithm, which encrypts and decrypts communication between devices and servers effectively using polynomial modular multiplication and modular addition operations. The main innovation of this study is the improvement of the traditional R-LWE algorithm, enhancing its efficiency and security. Experimental results demonstrated that, when compared to number theory-based algorithms and elliptic curve cryptography algorithms at a 256-bit security level, the enhanced algorithm achieves significant advantages. The improved algorithm encrypted 20 data points with an average runtime of only 3.6 ms, compared to 7.3 ms and 7.7 ms for the other algorithms. Similarly, decrypting the same amount of data had an average runtime of 2.9 ms, as opposed to 7.3 ms and 8 ms for the other algorithms. Additionally, the improved R-LWE algorithm had significant advantages in terms of communication and storage costs. Compared to the number theory-based algorithm, the R-LWE algorithm reduced communication and storage costs by 3 °C each, and compared to elliptic curve cryptography, it reduced them by 4 °C each. This achievement not only enhances the efficiency of encryption and decryption but also lowers the overall operational costs of the algorithm. The research has made significant strides in improving the security and efficiency of IoT device identity authentication by enhancing the R-LWE algorithm. 
This study provides theoretical and practical foundations for the development and application of related technologies, as well as new solutions for IoT security.
... In crop images, color features cannot fully describe the image, so texture features are also added to the feature extraction. In which, a method combining gray level co-occurrence matrix (GLCM) with local binary pattern (LBP) is put forward (Bazikar et al., 2022;Rehman et al., 2022). GLCM is a second-order combined conditional probability density function of an image, which describes the relative frequency of different gray level pixels appearing again in the window. ...
Article
Full-text available
Research on the classification and identification of crop diseases and pests can help farmers quickly prevent crop diseases and pests. A crop disease and pest identification model based on adaptive chaotic particle swarm optimization algorithm is raised. The model introduces swarm intelligence algorithm to optimize the features of image extraction. Then the adaptive inertia weight is used to improve the optimization performance of PSO, and the support vector is used to accurately classify crop pests and diseases. Finally, the model is trained by simulation experiment to evaluate the performance of the model and analyze the performance. The model has a good performance in the experiment, the model has a clear recognition effect in the color feature extraction of pests and diseases, and the recognition accuracy is 95.08% after combining the texture feature. Moreover, in the visual transformation of 20¡ã-40¡ã, the recognition accuracy of the model is above 90%. In practical application, the average accuracy of the model is 91.78%, which is 3.71% higher than that of the comparison algorithm. In comparison experiments, the classification accuracy of the proposed models is above 90%. The experimental outcomes denote that the proposed algorithm has good effectiveness in identifying crop diseases and pests. chaotic particle swarm; crops; diseases; characteristics; image
... The image brightness is excessively increased by processing with CLAHE algorithm, which leads to poor contrast and excessive brightness of the image. To solve this problem, the CLAHE algorithm with the addition of the gamma function was introduced [20]. The gamma function is a grayscale transform function that describes the luminance of a pixel and its numerical mapping, and is applied to an image to correct the image luminance and contrast [21]. ...
Article
Full-text available
Due to the refraction, reflection and absorption of light induced by seawater and suspended particles, the image recognition of underwater vehicle has the problem of low accuracy and difficult localization, and to solve this one problem, an improved LSTM classification algorithm (CH-LSTM) based on CLAHE and HOG is given. First, the improved CLAHE algorithm is used to enhance image by image noise removal and de-fogging, the problems of underwater image distortion, fine lines, and abrupt changes are solved in the aquaculture waters image. Then, the Histogram of Orientation Gradient (HOG) operator is used for feature extraction to describe the shape of sea cucumbers and generate feature vectors; finally, the LSTM algorithm is used to classify the feature vectors and avoid overfitting through memory gates, so the generalization ability of the model was enhanced. The identification and sorting experiments of underwater fishing sea cucumbers show that the proposed algorithm is superior, especially in the environment of poor water quality, and the identification and localization accuracy of sea cucumbers is improved compared with the traditional ground SVM and BP neural network algorithms, and the identification accuracy is above 95.28%.
... This contrasts with SDR multimedia technologies that can store and display only a fraction of the scene's visual information and, therefore, fails to provide an immersed sense of reality or visual sensations similar to the real world [27,29]. Furthermore, due to more information in HDR content, the efficacy of computer vision based detection, classification and recognition tasks can also be improved [19,21,38,40]. ...
Article
Full-text available
Affective computing is an important research area for developing an emotion recognition system. In such systems, stimuli for emotion elicitation are standard dynamic range (SDR) multimedia content. This study uses high dynamic range (HDR) multimedia content that effectively presents the scene’s inherent colors and high dynamic range of luminance as stimuli for emotion elicitation, and its impact on human emotional experience with different personality traits is explored. For this purpose, four low and high-valence multimedia clips in SDR and HDR versions were shown to sixty subjects. Their emotional experience was recorded on a valence-arousal plane, which is then statistically analyzed in terms of the type of content and human personality traits. The results of the t-test with a 95% confidence interval show that HDR multimedia content elicits better emotions than SDR multimedia content and can be effectively utilized in emotion recognition systems for better emotion elicitation.
... The method may have many important applications, as there is a growing need for more secure biometric procedures. This technology offers greater spatial resolution and imaging depth than current IR- and ultrasound-based palm-vessel imaging approaches (Rehman et al. 2022). Our method uses 3D structures instead of PAI-based fingerprint biometric technologies. ...
Article
Full-text available
The aim of this research is to enhance the accuracy of biometric palm print identification using the novel ResNet50 algorithm compared with X Gradient Boosting. Materials and Methods: In this study, the ResNet50 and X Gradient Boosting algorithms were compared using a sample size of 10 per algorithm (total sample size of 20), with a G Power of 0.8 and a 95% confidence interval (CI) to ensure statistical significance. The Birjand University Mobile Palmprint Database (BMPD) was obtained from the Kaggle repository and includes a total of 1640 images of both left- and right-hand palm prints. Result: The ResNet50 algorithm achieved a higher accuracy rate (94.7%) than the X Gradient Boosting algorithm (92.4%) in identifying and measuring the images. Statistical analysis indicated a significant difference between the two algorithms, with a p-value of 0.003 (independent-samples t-test, p < 0.05), suggesting that ResNet50 outperformed X Gradient Boosting in this experiment. According to the study's findings, ResNet50 is more effective than X Gradient Boosting at accurately identifying biometric palm prints.
... In this paper, the face detection and recognition challenge has been investigated in close shots. Face detection is one of the most important research areas and has been considered by many researchers; its applications include security screening of individuals, access control for criminals, and facial reconstruction [9]. Facial features significantly impact human interactions and convey an immediate sense of an individual's mood [10]. ...
Article
Full-text available
Face detection and recognition in abrupt dynamic images remains challenging due to the high complexity of the images. To tackle this issue, we employed the Gray-Level Co-occurrence Matrix (GLCM) to convert a long video into smaller consequential sections containing sequence information from a series of images. The GLCM is a matrix describing the relationship between the values of adjacent pixels in an image. The proposed method consists of two stages. First, the video is taken as input and processed with the histogram difference method: features are extracted using the co-occurrence matrices of the images and statistical measures, and the boundaries of sudden shots are extracted from the video. Second, face recognition with the Viola-Jones algorithm is performed on the sudden shots extracted in the first stage; the face is thus extracted in close shots by video data mining. In this method, we compared the model parameters for three window sizes (3, 5 and 7) and threshold limits for detecting abrupt cuts among the values (0.1, 0.5, 1.5, 1.5 and 2) for each window. The highest percentage of face detection is attained by considering the maximum percentage of abrupt cuts in the 5×5 window with ...
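The GLCM mentioned above counts how often pairs of gray levels co-occur at a fixed pixel offset. A minimal NumPy version, using a horizontal-neighbor offset and the common Haralick contrast feature (the paper's exact statistics may differ), might look like:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (dr, dc)."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(max(0, -dr), min(h, h - dr)):
        for c in range(max(0, -dc), min(w, w - dc)):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()  # normalize to joint probabilities

def contrast(p):
    """Haralick contrast: sum over p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2) .sum())
```

For shot-boundary detection, one would compute such statistics per frame and flag frames where the histogram/GLCM difference to the previous frame exceeds a threshold.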
... SML helps surgeons access and understand real-time cautions and suggestions during the surgical process. DL-based methods could be beneficial in rendering the location and target for the best clinical and surgical practice with better accuracy [38]. However, these AI/SML robots require further authentication to accomplish best practice. ...
Article
Full-text available
The global healthcare sector continues to grow rapidly and is regarded as one of the fastest-growing sectors of the fourth industrial revolution (Industry 4.0). The majority of the healthcare industry still uses labor-intensive, time-consuming, and error-prone traditional manual methods. This review addresses the current paradigm, the potential for new scientific discoveries, the state of technological readiness, the prospects for supervised machine learning (SML) in various healthcare sectors, and ethical issues. The effectiveness and innovation potential of disease diagnosis, personalized medicine, clinical trials, non-invasive image analysis, drug discovery, patient care services, remote patient monitoring, hospital data management, and nanotechnology in learning-based healthcare automation are evaluated, along with the requirement for explainable artificial intelligence (AI) in healthcare. To understand the potential architecture of non-invasive treatment, a thorough study of medical imaging analysis is presented from a technical point of view. This study also presents new thinking and developments that will push the boundaries and increase the opportunities for healthcare through AI and SML in the near future. SML-based applications require strong data-quality awareness, since healthcare is data-heavy and knowledge management is paramount; SML developments in biomedicine and healthcare need skilled practitioners, quality-data consciousness for data-intensive studies, and a knowledge-centric health management system. As a result, the merits, demerits, and precautions, along with the ethics and other effects of AI and SML, need to be taken into consideration. The overall insight in this paper will help researchers in academia and industry to understand and address the future research needed on SML in the healthcare and biomedical sectors.
Chapter
Full-text available
Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis remains challenging. Medical images have had a high impact on medicine, diagnosis, and treatment, and the most important part of image processing is image segmentation. This chapter presents a novel lung X-ray segmentation method using the U-net model. First, we construct a U-net that combines the lung images and their masks. Then, we convert the problem of classifying TB-positive and TB-negative lungs into lung segmentation, extracting the lungs by subtracting the chest from the radiograph. In experiments, the proposed model achieves 97.62% on the public datasets collected by Shenzhen Hospital, China, and the Montgomery County X-ray Set.
Article
Full-text available
Cancer is a fatal illness often caused by an aggregation of genetic disorders and a variety of pathological changes. Cancerous cells are abnormal, life-threatening growths that can develop in any part of the human body. Cancer, also known as a tumor, must be detected quickly and correctly in its initial stage to identify what might be beneficial for its cure. Complicated patient histories and improper diagnostics and treatment remain main causes of death. The aim of this research is to analyze, review, categorize, and address the current developments in human body cancer detection using machine learning techniques for breast, brain, lung, liver, and skin cancer and leukemia. The study highlights how cancer diagnosis and the cure process are assisted by machine learning with supervised, unsupervised, and deep learning techniques. Several state-of-the-art techniques are grouped into clusters, and results are compared on benchmark datasets in terms of accuracy, sensitivity, specificity, and false-positive metrics. Finally, challenges are highlighted for possible future work.
Article
Full-text available
In various fields, the Internet of Things (IoT) has gained a great deal of popularity due to its autonomous sensor operation at minimal cost. In medical and healthcare applications, IoT devices form an ecosystem that senses patients' medical conditions, such as blood pressure, oxygen level, heartbeat, and temperature, and takes appropriate actions on an emergency basis. Through it, patients' healthcare-related data is transmitted to remote users and medical centers for post-analysis. Different solutions based on Wireless Body Area Networks (WBAN) have been proposed to monitor patients' medical status using low-powered biosensor nodes; however, limiting energy consumption and communication costs remain demanding and interesting problems. Unbalanced energy consumption among biosensor nodes degrades the timely delivery of patient information to remote centers and negatively affects the medical system. Moreover, sensitive patient data is transmitted over the insecure Internet and is prone to security threats, so data privacy and integrity against malicious traffic are further challenging research issues for medical applications. This research article proposes a secure and energy-efficient framework using the Internet of Medical Things (IoMT) for e-healthcare (SEF-IoMT), whose primary objective is to decrease communication overhead and energy consumption between biosensors while transmitting healthcare data conveniently, and which also secures patients' medical data against unauthenticated and malicious nodes to improve network privacy and integrity. Simulation results show that the proposed framework improves the performance of medical systems in terms of network throughput by 18%, packet loss rate by 44%, end-to-end delay by 26%, energy consumption by 29%, and link breaches by 48% compared with other state-of-the-art solutions.
Article
Full-text available
With an overwhelming increase in the demand for autonomous systems, especially in applications related to intelligent robotics and visual surveillance, come stringent accuracy requirements for complex object recognition. A system that maintains its performance against a change in the object's nature is said to be sustainable, and sustainability has become a major research area for the computer vision community in the past few years. In this work, we present a sustainable deep learning architecture that utilizes multi-layer deep feature fusion and selection for accurate object classification. The proposed approach comprises three steps: 1) using two deep learning architectures, Very Deep Convolutional Networks for Large-Scale Image Recognition and Inception V3, it extracts features based on transfer learning; 2) all extracted feature vectors are fused by means of a parallel maximum covariance approach; and 3) the best features are selected using the Multi Logistic Regression controlled Entropy-Variances method. To verify the robustness of the selected features, the ensemble learning method Subspace Discriminant Analysis is utilized as a fitness function. The experimental process is conducted on four publicly available datasets, Caltech-101, the Birds database, the Butterflies database, and CIFAR-100, and a ten-fold validation process yields best accuracies of 95.5%, 100%, 98%, and 68.80% for these datasets, respectively. Based on a detailed statistical analysis and comparison with existing methods, the proposed selection method gives significantly higher accuracy. Moreover, its computational time is suitable for real-time implementation.
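As a toy illustration of the fuse-then-select pattern described above (concatenating feature vectors, then keeping the most informative columns), here is a variance-based selector in NumPy. Note this is only a sketch: the paper itself uses parallel maximum covariance fusion and an entropy-variance criterion, which this simplified version does not reproduce:

```python
import numpy as np

def fuse_and_select(f1, f2, k):
    """Concatenate two feature matrices (samples x dims) column-wise and
    keep the k columns with the highest variance across samples."""
    fused = np.hstack([f1, f2])
    var = fused.var(axis=0)                # per-feature variance
    keep = np.sort(np.argsort(var)[::-1][:k])  # top-k columns, original order
    return fused[:, keep], keep
```

Constant (zero-variance) features carry no discriminative information, so a variance criterion discards them first; entropy-based criteria generalize this idea to non-Gaussian feature distributions.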
Article
Full-text available
An entity's existence in an image can be depicted by the activity instantiation vector of a group of neurons (called a capsule). Recently, multi-layered capsules, called CapsNet, have proven to be state-of-the-art for image classification tasks. This research utilizes the prowess of this algorithm to detect pneumonia from chest X-ray (CXR) images, where an entity in the CXR image can help determine whether the patient is suffering from pneumonia. A simple model of capsules (known as Simple CapsNet) provides results comparable to the best deep learning models used earlier. Subsequently, a combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models. These models, Integration of Convolutions with Capsules (ICC) and Ensemble of Convolutions with Capsules (ECC), detect pneumonia with test accuracies of 95.33% and 95.90%, respectively. The latter model is studied in detail to obtain a variant called EnCC, where n = 3, 4, 8, 16; the E4CC model works optimally and gives a test accuracy of 96.36%. All these models were trained, validated, and tested on 5857 images from Mendeley.
Article
Full-text available
Alzheimer’s disease (AD) is an incurable neurodegenerative disorder accounting for 70%–80% of dementia cases worldwide. Although research on AD has increased in recent years, the complexity associated with brain structure and function makes early diagnosis of this disease a challenging task. Resting-state functional magnetic resonance imaging (rs-fMRI) is a neuroimaging technology widely used to study the pathogenesis of neurodegenerative diseases. In the literature, computer-aided diagnosis of AD is limited to binary classification or diagnosis of the AD and MCI stages, while its applicability to diagnosing multiple progressive stages of AD is relatively under-studied. This study explores the effectiveness of rs-fMRI for multi-class classification of AD and its associated stages: CN, SMC, EMCI, MCI, LMCI, and AD. A longitudinal cohort of resting-state fMRI from 138 subjects (25 CN, 25 SMC, 25 EMCI, 25 LMCI, 13 MCI, and 25 AD) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) is studied. To provide better insight into deep learning approaches and their application to AD classification, we investigate the ResNet-18 architecture in detail. We consider training the network from scratch using single-channel input, as well as transfer learning with and without fine-tuning using an extended network architecture. We experimented with residual neural networks on the AD classification task and compared the results with former research in this domain. The performance of the models is evaluated using precision, recall, F1-measure, AUC, and ROC curves. Our networks were able to classify the subjects significantly, and our fine-tuned model achieved improved results for all AD stages, with accuracies of 100%, 96.85%, 97.38%, 97.43%, 97.40%, and 98.01% for CN, SMC, EMCI, LMCI, MCI, and AD, respectively.
In terms of overall performance, we achieved state-of-the-art results with average accuracies of 97.92% and 97.88% for the off-the-shelf and fine-tuned models, respectively. The analysis of the results indicates that classification and prediction of neurodegenerative brain disorders such as AD using functional magnetic resonance imaging and advanced deep learning methods is promising for clinical decision-making and has the potential to assist in the early diagnosis of AD and its associated stages.
Article
Soft biometric information, such as gender, iris, and voice, can be helpful in various applications, such as security, authentication, and validation. The iris is a secure biometric with low forgery and error rates thanks to its highly distinctive features, and it has been used for the last few decades. Iris recognition can be used both independently and as part of secure recognition and authentication systems. Existing iris-based gender classification techniques have low accuracy rates and high computational complexity. Accordingly, this paper presents an authentication approach through gender classification from iris images using a support vector machine (SVM), which responds well to sustained changes, based on Zernike and Legendre invariant moments and the histogram of oriented gradients. In this study, invariant moments are used for feature extraction from iris images. After extracting these descriptors' attributes, the attributes are combined through keycode fusion, and the SVM performs gender classification on the fused feature vector. The proposed approach is evaluated on the CVBL data set, and results are compared with the state of the art based on local binary patterns and Gabor filters. The proposed approach achieved a 98% gender classification rate with low computational complexity and could be used as an authentication measure.
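Invariant moments of the kind used here (Zernike, Legendre) share the idea of normalizing image moments so the descriptor stays stable under translation and scale. A minimal sketch using the first Hu moment, a simpler relative of those descriptors chosen purely for illustration, is:

```python
import numpy as np

def hu1(img):
    """First Hu moment (eta20 + eta02): invariant to translation and scale."""
    img = np.asarray(img, float)
    y, x = np.indices(img.shape)
    m00 = img.sum()                                       # total mass
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00  # centroid
    mu20 = ((x - cx) ** 2 * img).sum()                    # central moments
    mu02 = ((y - cy) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2  # eta_pq = mu_pq / m00^(1+(p+q)/2), p+q=2
</n>```

Because the moments are computed about the centroid and normalized by the total mass, translating the pattern within the frame leaves the value unchanged, which is exactly the property that makes such moments useful as iris/shape features.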
Article
Brain tumors are one of the most dreadful forms of cancer and have caused a huge number of deaths among children and adults in the past few years. According to WHO figures, 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep-learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are passed to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32%, 96.97%, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
Article
The number of patients diagnosed with melanoma is drastic, and the disease contributes many deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated systems for the diagnosis process. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based fusion of seed-segmented images and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function is implemented and fed to top-hat and bottom-hat filters, which are later fused for contrast stretching; (b) lesions are segmented with seed region growing and a graph-cut method, and both segmented lesions are fused through pixel-based fusion; (c) multilevel features such as histogram of oriented gradients (HOG), speeded-up robust features (SURF), and color are extracted and simply concatenated; and (d) finally, variance precise entropy-based feature reduction and classification are performed through an SVM with a cubic kernel function. Two experiments are performed to evaluate this method. Segmentation performance is evaluated on PH2, ISBI2016, and ISIC2017 with accuracies of 95.86%, 94.79%, and 94.92%, respectively. Classification performance is evaluated on the PH2 and ISBI2016 datasets with accuracies of 98.20% and 95.42%, respectively. The results of the proposed automated system are outstanding compared with current techniques reported in the state of the art, which demonstrates the validity of the proposed method.
Article
Human action recognition (HAR) has gained much attention in the last few years due to its enormous applications, including human activity monitoring, robotics, and visual surveillance, to name but a few. Most previously proposed HAR systems have focused on hand-crafted image features. However, these features cover limited aspects of the problem and show performance degradation on large, complex datasets. Therefore, in this work, we propose a novel HAR system based on the fusion of conventional hand-crafted features, using the histogram of oriented gradients (HoG), and deep features. Initially, the human silhouette is extracted with the help of a saliency-based method implemented in two phases. In the first phase, motion and geometric features are extracted from the selected channel, while the second phase calculates the chi-square distance between the extracted features and threshold-based minimum-distance features. Afterwards, the extracted deep CNN and hand-crafted features are fused to generate a resultant vector. Moreover, to cope with the curse of dimensionality, an entropy-based feature selection technique is proposed to identify the most discriminant features for classification using a multi-class support vector machine (M-SVM). All simulations are performed on five publicly available benchmark datasets: Weizmann, UCF11 (YouTube), UCF Sports, IXMAS, and UT-Interaction. A comparative evaluation shows that the proposed model achieves superior performance compared with several existing methods.
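The chi-square distance used in the silhouette-extraction phase compares two histograms or feature vectors; a common form (one of several variants in the literature, assumed here) is:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two non-negative feature vectors.
    Zero for identical vectors; larger values mean greater dissimilarity."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))  # eps avoids 0/0
```

Compared with plain Euclidean distance, the per-bin normalization makes the measure sensitive to relative rather than absolute differences, which suits histogram-style features such as HoG.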
Article
Background and objective: Brain tumors occur because of anomalous development of cells and are one of the major causes of death in adults around the globe. Millions of deaths could be prevented through early detection of brain tumors. Earlier brain tumor detection using Magnetic Resonance Imaging (MRI) may increase a patient's survival rate, since MRI shows the tumor more clearly, which helps in the process of further treatment. This work aims to detect tumors at an early phase. Methods: In this manuscript, a Wiener filter with different wavelet bands is used to de-noise and enhance the input slices. Subsets of tumor pixels are found with Potential Field (PF) clustering. Furthermore, global thresholding and different mathematical morphology operations are used to isolate the tumor region in Fluid-Attenuated Inversion Recovery (FLAIR) and T2 MRI. For accurate classification, Local Binary Pattern (LBP) and Gabor Wavelet Transform (GWT) features are fused. Results: The proposed approach is evaluated in terms of peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM), yielding 76.38, 0.037, and 0.98 on T2 and 76.2, 0.039, and 0.98 on FLAIR, respectively. The segmentation results have been evaluated at the pixel level, on individual features, and on fused features. At the pixel level, the proposed approach is compared with ground-truth slices and validated in terms of foreground (FG) pixels, background (BG) pixels, error region (ER), and pixel quality (Q). The approach achieved 0.93 FG and 0.98 BG precision and 0.010 ER on a local dataset. On the multimodal brain tumor segmentation challenge dataset BRATS 2013, 0.93 FG and 0.99 BG precision and 0.005 ER are acquired; similarly, on BRATS 2015, 0.97 FG and 0.98 BG precision and 0.015 ER are obtained. In terms of quality, the average Q value and deviation are 0.88 and 0.017.
At the fused-feature level, specificity, sensitivity, accuracy, area under the curve (AUC), and dice similarity coefficient (DSC) are 1.00, 0.92, 0.93, 0.96, and 0.96 on BRATS 2013; 0.90, 1.00, 0.97, 0.98, and 0.98 on BRATS 2015; and 0.90, 0.91, 0.90, 0.77, and 0.95 on the local dataset, respectively. Conclusion: The presented approach outperformed existing approaches.
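The classic LBP feature used above, which the MRELBP descriptor of the main article extends, thresholds a pixel's 3×3 neighborhood against its center to form an 8-bit code; a minimal sketch:

```python
import numpy as np

# clockwise neighbor offsets starting at the top-left pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP code for interior pixel (r, c): bit is 1 if neighbor >= center."""
    center = img[r, c]
    code = 0
    for dr, dc in OFFSETS:
        code = (code << 1) | int(img[r + dr, c + dc] >= center)
    return code
```

A texture descriptor is then the histogram of these codes over a region. MRELBP improves noise robustness by comparing median-filtered responses rather than raw intensities, which this sketch deliberately omits.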