CVBL_IRIS Gender Classification Database
Image Processing and Biometric Research, Computer Vision And Biometric Laboratory (CVBL)
Saeed Aryanmehr
Computer Department
Islamic Azad University, Isfahan (Khorasgan) Branch
Isfahan, Iran
e-mail: Saeed.aryanmehr@khuisf.ac.ir
Mohsen Karimi
Electric Department
Islamic Azad University, Dolatabad Branch
Isfahan, Iran
e-mail: mohsenkarimi.ht@iauda.ac.ir
Farsad Zamani Boroujeni
Computer Department
Islamic Azad University, Isfahan (Khorasgan) Branch
Isfahan, Iran
e-mail: f.zamani@khuisf.ac.ir
Abstract-Iris recognition has been an interesting subject for many research studies over the last two decades and has raised many challenges for researchers. One new and interesting challenge in iris studies is gender recognition using iris images. Gender classification can be applied to reduce the processing time of the identification process; it can also be used in applications such as access control systems and gender-based marketing. To the best of our knowledge, only a small number of studies have been conducted on gender recognition through the analysis of iris images. Considering the importance of this research area and its commercial applications, it is essential for researchers to make use of efficient color features in their algorithms, which necessitates the production of color iris image databases. The present study introduces an iris image database for gender classification and proposes a new gender classification algorithm for its evaluation. The database consists of iris images taken from 720 subjects, 370 females and 350 males, among university students. For each student, more than 6 images were taken from both the left and right eyes. After examining the images, the 3 most appropriate images from the left eye and 3 from the right eye were selected and included in the database. All 4320 images in this database were taken under the same conditions and with the same color camera. Finally, the quality and efficiency of the introduced database are evaluated using a new method that extracts Zernike moments from spectral features, together with two well-known classifiers, SVM and KNN. The results reveal a significant improvement in gender classification compared with similar databases.
Keywords-iris; gender classification; texture spectral feature; Zernike moment
I. INTRODUCTION
Gender classification using biometric features is an interesting topic in biometric identification that attracts many researchers around the world. There are some research studies in the literature on gender recognition; however, most of them rely only on visual features extracted from gray-level images. Comprehensive studies on all important aspects of the gender classification problem require an appropriate database for pre-processing, proper feature extraction, and training classifiers. The resulting algorithms would be more efficient and more suitable for industry. Among the different biometrics applied in gender recognition, such as face, iris, and voice, iris patterns are more difficult to forge, providing a highly reliable authentication tool for important and sensitive systems. In recent years, there has been a vast amount of research on producing efficient algorithms and databases for recognizing gender through the analysis of iris images. Nevertheless, there are only a few iris image databases for gender recognition. Among them, two widely used databases are UND_V [1] and GFI [2], which are described in the following section. Images in both the UND_V and GFI databases were taken in the infrared spectrum, which requires expensive equipment. The UND_V database has 1944 images from 324 subjects; 3 images were taken from each of the left and right eyes. The database covers both genders, with 175 male and 149 female subjects. The GFI database has 3000 images from 1500 subjects (750 males and 750 females), and both left and right eyes were acquired and saved in the database. Despite the many advantages of these two databases, there are some shortcomings as well. Firstly, both databases were produced with infrared cameras, so the algorithms that process these images cannot benefit from the color features of the iris. Secondly, although the UND_V database has sufficient left- and right-eye images per subject, it was collected from a small population, which provides insufficient examples for training algorithms in real-world applications. Although the GFI database has sufficient subjects, it contains only one image per eye, which makes it unreliable in the sense that there is no alternative if the quality of an image is not appropriate for a particular application. Nevertheless, thanks to the databases produced so far, there are some comprehensive research studies with interesting findings.
Research on gender recognition and classification using iris images was first introduced by Thomas et al. in 2007 [3]. Nevertheless, only a few studies have been conducted on gender recognition and classification using iris images [1]-[5]. Most of these studies follow a similar procedure, consisting of image acquisition, pre-processing, feature extraction, and finally classification. The method proposed by Thomas et al. relies mainly on the iris segmentation algorithm proposed by Daugman [6]. The authors of [3] were the first researchers to propose gender classification based on iris images. They examined 5000 left-eye irises. In their proposed method, 7 geometric features were chosen from the iris and cornea. They then used a random-tree classifier and achieved 80% accuracy. The study was conducted only on left eyes, which limits the applicability of their database compared with those that include iris images of both eyes. Two years later, Hollingsworth et al. [7] showed that in iris images segmented by the Daugman method, pixels contain different amounts of information. In other words, not all segments of the iris possess the same information, and the intermediate bands usually carry more. Generally, identifying the bands with more information can lead to higher speed and accuracy in the recognition process. Following this study, the method suggested in [8] used statistical and spectral features extracted by the wavelet transform, followed by a support vector machine, for iris recognition and achieved 83.3% accuracy. They extracted information from all segments of the iris, which causes a remarkable increase in processing time. A total of 300 images were used in their study, taken from both eyes with one image per eye. The sample consisted of 150 subjects, including 50 females and 100 males.
The method presented in [1] suggests the use of local binary patterns (LBP) for gender recognition and reported a great improvement. In their proposed method, different versions of LBP were examined on iris images, resulting in 91% accuracy. Tapia et al. [2] found that some extracted bands contain more information for gender recognition than others. To select the most informative bands, the mutual information technique was used, resulting in 89% accuracy. In their proposed method, the most informative bands were selected from 20 bands, and they found that by increasing the number of bands, it is likely to find more informative ones. Recently, many researchers have been interested in Zernike features, because Zernike moments extract strong features from images with soft textures. Zernike moments have proved to be efficient in palm print detection [9], fingerprint identification [10], and face recognition [11], which are common biometric methods used for human identification. Thus, it is expected that features extracted from moments near the center of the iris serve as major descriptors for human identification and gender recognition, and it is therefore important to understand the properties of spectral features extracted from the regions close to the center of the iris. Considering the circular shape of the iris, this region contains a significant amount of texture with spectral features [2]. The present study proposes a new Zernike moment-based feature extraction method that is able to improve the gender classification rate significantly. The paper is organized as follows: Section II introduces the CVBL_IRIS database in detail. Section III presents the proposed method for gender classification, based on Zernike moments that extract spectral features from the texture. Section IV compares the results obtained from applying the proposed method to the CVBL_IRIS and UND_V databases and discusses the findings. Finally, Section V presents the conclusions.
II. CVBL_IRIS DATABASE
The iris images in the existing databases were acquired using infrared cameras, which only produce gray-level images. The present study proposes a new database containing color iris images for human identification and gender classification. The proposed database was produced in 2017, and the images were taken with a Canon D550 camera with an 18-55 mm lens. In order to prevent the reflection of light from the iris, a ring flash, the MEIKE MK-FC110 [13], was used. The flash was installed on the lens to improve image quality and enhance the salient patterns of the iris. Figure 1 shows the ring flash installed on the camera.
Figure 1. The MEIKE MK-FC110 ring flash.
The resolution of the acquired images was originally 5184×3456 pixels. The images were then resized to 320×280 for final processing. This pre-processing step was done with the ImageJ 1.43u software [14], with which the iris region was extracted from the images. The database consists of 4320 samples from 720 subjects. The subjects were selected from university students of the Islamic Azad University, Isfahan (Khorasgan) branch. For each subject, six images were taken: 3 from the left eye and 3 from the right eye. The numbers of male and female subjects were almost equal. The age range of the sample population was 18 to 50 years, but the majority of subjects were students between 18 and 25 years old. Table I compares the existing databases for gender recognition with the proposed database.
As shown in Table I, the sample population and image quality are similar to those of the standard databases. Figure 2 shows three iris images taken from the left eye of a female subject, and Figure 3 shows three taken from the right eye of a male subject after the cropping step.
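As an illustration of this pre-processing step, the resizing and grayscale conversion can be sketched as follows. This is a minimal NumPy sketch, not the authors' ImageJ workflow; the nearest-neighbor resize and the BT.601 luminance weights are assumptions.

```python
import numpy as np

def rgb2gray(img):
    """Convert an H x W x 3 RGB image to grayscale (ITU-R BT.601 weights)."""
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize; the paper performed this step in ImageJ."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Example: downscale a dummy 5184x3456 frame to the paper's working size.
frame = np.zeros((3456, 5184, 3), dtype=np.uint8)
frame[..., 0] = 255                      # pure red test pattern
gray = rgb2gray(resize_nearest(frame, 280, 320))
print(gray.shape)                        # (280, 320)
```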
TABLE I. SUMMARY OF THE PROPOSED DATABASE, GFI, AND UND_V

Database                   | UND_V              | GFI                | Our Database
Image type                 | NIR                | NIR                | VIS
No. of subjects            | 324                | 1500               | 720
No. of images per subject  | 6                  | 2                  | 6
No. of images per eye      | 3                  | 1                  | 3
No. of subjects per gender | 175 men, 149 women | 750 men, 750 women | 350 men, 370 women
Resolution                 | 640x480            | 640x480            | 640x580
Format                     | TIFF               | TIFF               | JPG
Available                  | Yes                | Yes                | Yes
Figure 2. Three left eye iris images taken from a female subject
Figure 3. Three right eye iris images taken from a male subject
All the images in this database were acquired under similar conditions, and the distance between the lens and the subject's eye was controlled by a tripod and a fixing instrument. Figure 4 shows how the imaging instrument was used.
The images of the proposed database are accessible on IEEE Dataport (https://ieee-dataport.org/documents/cvbl-iris-super-resolution-dataset). All the data files are labeled by a very simple rule to make searching and querying the images easy and straightforward. Table II presents the naming rule accompanied by an example.
Figure 4. The imaging equipment used for acquiring CVBL_IRIS
database
TABLE II. IMAGE FILE NAME DESCRIPTION

Field       | Values
Subject ID  | ID
Orientation | LE (Left), RI (Right)
Eye ID      | 01 to 03

Example file name: 122.GM.LE.00
Figure 5. An example of naming the images in the CVBL database: "122" is the subject ID, "GM" denotes a male subject, "LE" the left eye, followed by the eye ID.
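A short sketch of how a file name following this rule might be parsed. The field meanings, in particular "GM"/"GF" for gender, are inferred from the example in Table II and Figure 5 and are not documented by the authors.

```python
def parse_cvbl_name(name):
    """Split a CVBL_IRIS file name of the form <subject>.<gender>.<eye>.<index>.

    Field meanings are inferred from the example '122.GM.LE.00':
    'GM'/'GF' is assumed to encode gender, 'LE'/'RI' the eye.
    """
    subject, gender, eye, index = name.split(".")
    return {
        "subject": subject,
        "gender": {"GM": "male", "GF": "female"}.get(gender, gender),
        "eye": {"LE": "left", "RI": "right"}.get(eye, eye),
        "index": index,
    }

print(parse_cvbl_name("122.GM.LE.00"))
# {'subject': '122', 'gender': 'male', 'eye': 'left', 'index': '00'}
```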
III. THE EVALUATION ALGORITHM
The spectral features of the iris texture include a huge amount of information that can be used to increase identification accuracy. Texture information intuitively represents the degree of smoothness, coarseness, and homogeneity. There are two types of texture features: spatial domain features and frequency domain features. Three main methods can be used to describe texture:
Statistical Methods
Spectral Methods
Structural Methods
The mean, standard deviation, smoothness, third moment, homogeneity, and entropy are examples of statistical measures. Spectral features involve frequency domain transforms such as the Fourier transform or multi-resolution transforms, e.g., the wavelet transform. The recognition and interpretation of spectral features become simpler when the spectrum is expressed in polar coordinates.
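For illustration, the statistical measures listed above can be computed from the intensity histogram as follows. This is a generic sketch following the classical textbook definitions, not the authors' code.

```python
import numpy as np

def texture_stats(gray, bins=256):
    """Statistical texture measures from the intensity histogram
    (mean, standard deviation, smoothness, third moment, uniformity,
    entropy), following the classical histogram-based formulation."""
    counts, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = counts / counts.sum()            # normalized histogram p(z_i)
    z = np.arange(bins)
    m = (z * p).sum()                    # mean intensity
    var = ((z - m) ** 2 * p).sum()       # variance
    return {
        "mean": m,
        "std": np.sqrt(var),
        # variance is normalized to [0, 1] so smoothness is scale-free
        "smoothness": 1 - 1 / (1 + var / (bins - 1) ** 2),
        "third_moment": (((z - m) / (bins - 1)) ** 3 * p).sum(),
        "uniformity": (p ** 2).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
    }

flat = np.full((64, 64), 128)            # constant patch: no texture
print(texture_stats(flat)["entropy"])    # 0.0
```

A constant patch has zero entropy and zero smoothness measure, as expected for a texture-free region.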
Almost all identification methods in the various biometric domains use a set of particular features in either the spatial or the frequency domain. This holds for gender classification through iris images as well. The main objective of the present study is to propose a new database for gender classification, together with a method for evaluating gender classification on this database. The proposed algorithm, called SP_ZR, is applied to the UND_V and CVBL_IRIS databases in order to compare their results and quality. In the proposed method, texture spectral features called Zernike moments are used. In the following section, the basic concepts of the spectral feature extractor and Zernike moments are presented and elaborated.
A. The Basic Concepts of Zernike and Spectral Texture
Zernike features are highly popular because they can extract strong features from highly delicate textures, and the iris has delicate, coherent textures with a huge variety of patterns. Zernike moments have proved to be efficient features for fingerprint images, palm lines, palm vessels, and face images. Thus, it is expected that Zernike moments can efficiently extract useful features for gender recognition. Also, Zernike moments have no information redundancy because their basis functions are orthogonal. Therefore, in comparison with other features, the computational load of the Zernike moments is reduced as a result of a significant reduction in the data. Furthermore, Zernike moments are invariant to rotation and translation and are robust to noise. The core of the Zernike moments are the Zernike polynomials. The two-dimensional Zernike moment of order p and repetition q of an image f expressed in polar coordinates (r, θ) is

Z_{pq} = \frac{p+1}{\pi} \int_{0}^{2\pi} \int_{0}^{1} f(r,\theta)\, [V_{pq}(r,\theta)]^{*}\, r\, dr\, d\theta  (1)

where the Zernike polynomial is

V_{pq}(r,\theta) = R_{pq}(r)\, e^{jq\theta}  (2)

and R_{pq}(r) is its real-valued radial part. In equation (3), p ≥ |q| and p − |q| is even:

R_{pq}(r) = \sum_{s=0}^{(p-|q|)/2} \frac{(-1)^{s}\,(p-s)!}{s!\left(\frac{p+|q|}{2}-s\right)!\left(\frac{p-|q|}{2}-s\right)!}\, r^{p-2s}  (3)
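A didactic sketch of Eqs. (1)-(3) in Python. This is a direct implementation of the standard definitions, not the authors' SP_ZR code; image values outside the unit disk are masked out.

```python
from math import factorial

import numpy as np

def radial_poly(p, q, r):
    """Zernike radial polynomial R_pq(r) of Eq. (3); defined for p >= |q|
    with p - |q| even, and identically zero otherwise."""
    q = abs(q)
    r = np.asarray(r, dtype=float)
    if (p - q) % 2:
        return np.zeros_like(r)
    out = np.zeros_like(r)
    for s in range((p - q) // 2 + 1):
        coeff = ((-1) ** s * factorial(p - s)
                 / (factorial(s)
                    * factorial((p + q) // 2 - s)
                    * factorial((p - q) // 2 - s)))
        out = out + coeff * r ** (p - 2 * s)
    return out

def zernike_moment(img, p, q):
    """Order-p, repetition-q Zernike moment of a square image mapped onto
    the unit disk, discretizing Eq. (1)."""
    n = img.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1                                   # unit disk only
    conj_basis = radial_poly(p, q, r) * np.exp(-1j * q * theta)  # V_pq*
    cell = (2.0 / (n - 1)) ** 2                     # area of one grid cell
    return (p + 1) / np.pi * (img * conj_basis * mask).sum() * cell

print(float(radial_poly(2, 0, 1.0)))   # R_20(1) = 2*1^2 - 1 = 1.0
```

For a constant image of ones, Z_00 evaluates to approximately 1, matching the normalization in Eq. (1).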
IV. AN EXPLANATION OF THE PROPOSED METHOD
The present paper introduces a new iris database for gender recognition. It also proposes a new feature extraction method, called SP_ZR, based on Zernike moments computed from texture spectral features. Figure 6 shows the block diagram of the implementation steps of this algorithm.
In the proposed method, the images in both the CVBL_IRIS and UND_V databases are first divided into training and testing partitions. Of the three existing images per eye, two are used for training and one for testing. In the next step, the Daugman algorithm is used for iris segmentation. Figure 7 shows the original image, the segmented region, and the normalized image for a sample iris image in the CVBL_IRIS database. Similarly, Figure 8 shows the cropped image and the segmented and normalized iris image in the UND_V database. After the segmentation step, the Zernike moments are extracted from the segmented iris image. The final feature vectors are divided into the male and female classes, providing the training and testing samples for the SVM and KNN classifiers.
[Figure 6 block diagram: Image acquisition → Pre-processing (crop image, resize image and rgb2gray) → Segmentation & Normalization → Feature extraction (extract Zernike features, extract texture features) → features database (train data) / test image → Classification (SVM & KNN) → Gender classification.]
Figure 6. Block diagram of proposed method
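The training/testing flow in Figure 6 can be illustrated with a minimal nearest-neighbor classifier. This is a stand-in sketch: the segmentation and Zernike extraction steps are replaced by toy feature vectors, and the label encoding is an assumption.

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k=1):
    """Minimal KNN classifier (Euclidean distance, majority vote);
    a stand-in for the SVM/KNN stage of the diagram, not the authors' code."""
    preds = []
    for x in test_x:
        d = np.linalg.norm(train_x - x, axis=1)      # distances to all samples
        nearest = train_y[np.argsort(d)[:k]]         # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)

# Toy feature vectors standing in for per-eye Zernike features
# (two training images and one test image per eye, as in the 2/1 split above).
train_x = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])          # 0 = female, 1 = male (assumed labels)
test_x = np.array([[0.15, 0.15], [0.85, 0.85]])
print(knn_predict(train_x, train_y, test_x))   # [0 1]
```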
Figure 7. a) a sample cropped image, b) segmented image, c) normalized image in the CVBL_IRIS database
V. RESULTS AND DISCUSSION
The proposed method, SP_ZR, was applied to the CVBL_IRIS database. It first extracts the spectral features from the iris texture; then the Zernike coefficients of these spectral features are computed. To evaluate the proposed method, the identification rate was used as the evaluation measure to demonstrate the superiority of the CVBL_IRIS database over the UND_V database. Table III shows the results of applying the proposed method to CVBL_IRIS and UND_V with the KNN classifier.
Figure 8. a) a sample cropped image, b) segmented image, c) normalized image in the UND_V database
TABLE III. COMPARISON OF THE IDENTIFICATION RATE OF THE PROPOSED METHOD FOR THE RIGHT AND LEFT EYES, FIRST- TO 9TH-ORDER MOMENTS, WITH THE KNN CLASSIFIER

Order | CVBL Right | CVBL Left | UND_V Right | UND_V Left
  1   |    78%     |    68%    |     59%     |    60%
  2   |    77%     |    78%    |     63%     |    64%
  3   |    80%     |    79%    |     66%     |    68%
  4   |    85%     |    82%    |     71%     |    69%
  5   |    83%     |    83%    |     70%     |    71%
  6   |    81%     |    85%    |     74%     |    72%
  7   |    80%     |    88%    |     70%     |    73%
  8   |    80%     |    86%    |     70%     |    71%
  9   |    82%     |    82%    |     71%     |    73%
In this phase, the first- to ninth-order moments were tested. As shown in Table III, the best results on CVBL_IRIS were obtained with the 7th- and 6th-order moments for the left eye, with identification rates of 88% and 85%, respectively. For the UND_V database, the best identification rate for the right eye was about 74%, at the 6th-order moment, while the best rate for the left eye, 73%, was obtained at the 7th- and 9th-order moments.
Table IV compares the recognition rates of the proposed method for the left and right eyes, for moment orders from first to 9th, with the SVM classifier on both databases.
With the SVM classifier on CVBL_IRIS, the best result was obtained at the 9th-order moment for the right eye, with an identification rate of 77%; the corresponding rate for the left eye was 76%. After applying the proposed method, SP_ZR, to the UND_V database, the best results at the 9th-order moment were 71% for the right eye and 69% for the left eye.
The findings reveal that the proposed method, which is based on extracting Zernike moment-based features, performs much better on CVBL_IRIS than on the UND_V database. The presence of dominant texture features in the recorded CVBL_IRIS images accounts for this superiority. Although the computation time increases with the order of the Zernike moment, the improvement in the identification rate appears to compensate for the increase in computation time.
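The gap between the two databases under the KNN classifier can be quantified by averaging the identification rates of Table III over all moment orders; the values below are copied directly from that table.

```python
# Identification rates from Table III (KNN classifier, moment orders 1-9).
cvbl_right = [78, 77, 80, 85, 83, 81, 80, 80, 82]
cvbl_left  = [68, 78, 79, 82, 83, 85, 88, 86, 82]
undv_right = [59, 63, 66, 71, 70, 74, 70, 70, 71]
undv_left  = [60, 64, 68, 69, 71, 72, 73, 71, 73]

def mean(xs):
    return sum(xs) / len(xs)

print(f"CVBL mean:  {mean(cvbl_right + cvbl_left):.1f}%")   # 80.9%
print(f"UND_V mean: {mean(undv_right + undv_left):.1f}%")   # 68.6%
```

Averaged over both eyes and all orders, CVBL_IRIS leads UND_V by roughly 12 percentage points under the KNN classifier.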
TABLE IV. COMPARISON OF THE IDENTIFICATION RATE OF THE PROPOSED METHOD FOR THE RIGHT AND LEFT EYES, FIRST- TO 9TH-ORDER MOMENTS, WITH THE SVM CLASSIFIER

Order | CVBL Right | CVBL Left | UND_V Right | UND_V Left
  1   |    68%     |    61%    |     58%     |    59%
  2   |    69%     |    66%    |     61%     |    63%
  3   |    68%     |    70%    |     63%     |    61%
  4   |    69%     |    70%    |     62%     |    59%
  5   |    69%     |    69%    |     65%     |    63%
  6   |    73%     |    73%    |     57%     |    60%
  7   |    70%     |    74%    |     69%     |    68%
  8   |    73%     |    76%    |     70%     |    72%
  9   |    77%     |    76%    |     71%     |    69%
VI. CONCLUSION
The present study proposes a new database, called CVBL_IRIS, for gender classification based on iris images. To evaluate the proposed database, a new feature extraction method, called SP_ZR, based on extracting Zernike moments from spectral features, was proposed and applied to the CVBL_IRIS database. To evaluate the proposed method and to compare the databases, the method was also applied to the UND_V database. The identification rate results show that the extracted features are more discriminative on the proposed database than on its counterpart. Furthermore, using our database, future studies can exploit color features for iris recognition and gender classification.
REFERENCES
[1] J. E. Tapia, C. A. Perez, and K. W. Bowyer, "Gender classification from iris images using fusion of uniform local binary patterns," in European Conference on Computer Vision, 2014, pp. 751–763.
[2] J. E. Tapia, C. A. Perez, and K. W. Bowyer, "Gender classification from the same iris code used for recognition," IEEE Trans. Inf. Forensics Secur., vol. 11, no. 8, pp. 1760–1770, 2016.
[3] V. Thomas, N. V. Chawla, K. W. Bowyer, and P. J. Flynn, "Learning to predict gender from iris images," in Proc. First IEEE Int. Conf. Biometrics: Theory, Applications, and Systems (BTAS), 2007, pp. 1–5.
[4] A. Bansal, R. Agarwal, and R. K. Sharma, "SVM based gender classification using iris images," in Proc. Fourth Int. Conf. Computational Intelligence and Communication Networks (CICN), 2012, pp. 425–429.
[5] S. Lagree and K. W. Bowyer, "Predicting ethnicity and gender from iris texture," in Proc. IEEE Int. Conf. Technologies for Homeland Security (HST), 2011, pp. 440–445.
[6] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, 2004.
[7] K. P. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "The best bits in an iris code," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 6, pp. 964–973, 2009.
[8] A. Bansal, R. Agarwal, and R. K. Sharma, "SVM based gender classification using iris images," in Proc. Fourth Int. Conf. Computational Intelligence and Communication Networks (CICN), 2012, pp. 425–429.
[9] S. Kahyaei and M. S. Moin, "Robust matching of fingerprints using pseudo-Zernike moments," in Proc. 4th Int. Conf. Control, Instrumentation, and Automation (ICCIA), 2016, pp. 116–120.
[10] J. Svoboda, J. Masci, and M. M. Bronstein, "Palmprint recognition via discriminative index learning," in Proc. Int. Conf. Pattern Recognition (ICPR), 2017, pp. 4232–4237.
[11] W. L. Yang and L. L. Wang, "Research of palmprint identification method using Zernike moment and neural network," in Proc. 6th Int. Conf. Natural Computation (ICNC), 2010, vol. 3, pp. 1310–1313.
[12] D. N. Satange, A. Alsubari, and R. J. Ramteke, "Composite feature extraction based on Gabor and Zernike moments for face recognition," IOSR J. Comput. Eng., 2017.
[13] "Meike FC-110 LED macro ring flash light for Canon EOS, Nikon, Pentax, Olympus cameras," MeiKe Store. [Online]. Available: http://www.meikestore.com/product/meike-fc-110-meike-fc-110-led-macro-ring-flash-light-fc110-for-canon-eos-nikon-pentax-olympus-camera/2094.html. [Accessed: 21-Dec-2017].
[14] ImageJ download page. [Online]. Available: https://imagej.nih.gov/ij/download.html. [Accessed: 21-Dec-2017].
... This section first investigates the wavelet scattering transform's effect on the classifiers' performance in the training and testing phases. For this purpose, the two sets of feature samples are used for training and testing the same classifier; the first feature set includes the deep features extracted from the raw image information and the second set includes the features obtained from applying the wavelet scattering transform on the same image data in the CVBL dataset [4]. The training and testing split comprised 1200 and 600 normalized iris images, respectively. ...
... The CVBL dataset [4] was introduced in 2017. The resolution of its original images is 5184×3456, which is reduced to 280×320 before the preprocessing step. ...
... The present study compares the performance of its method against that of a counterpart method by employing the CVBL dataset introduced in [4] and the UTIRIS dataset [14]. With the expectation that the proposed method is comparable with previously published deep learning solutions, it is compared with three deep learning methods recently introduced as gender-fromiris tasks [17,20,26,29,42]. ...
Article
Full-text available
Recognition of gender from iris images can be considered a texture classification task in which a classification model discriminates iris textures of male and female subjects. Although many researchers have proposed efficient iris texture classification methods that rely on deep features or employ Fourier and wavelet transforms, several issues have still been reported in the literature. On the one hand, it is difficult to discriminate the details of iris textures using the features extracted by traditional frequency domain transforms. On the other hand, in different imaging conditions, small changes in pupil diameter or head rotations result in the translation and deformation of the iris texture and inaccurate classification results. To overcome these challenges, the current study proposes an approach that employs a feature extraction method based on a wavelet scattering transform comparable with deep features extracted from raw image data using convolutional neural networks. In the proposed method, the scattering coefficients are extracted from each RGB channel, followed by applying the principal component analysis (PCA) to reduce the extracted features’ dimensionality. These features are used to train a convolutional neural network. The current paper compares the deep feature vectors extracted from raw RGB images against features obtained from the wavelet scattering transform. This comparison is made according to the performance results obtained from a fine-tuned multi-layer perceptron (MLP) model trained by both feature sets. Experiments conducted on CVBL and UTIRIS databases indicate that using a wavelet scattering transform and extracting second-order features can significantly enhance the performance of the iris-based gender classification in comparison to deep features achieved from applying a deep neural network to raw pixel information. 
Moreover, our feature extraction method provides learnable features, thus eliminating the need for an additional training step to obtain deep features, as performed in the most recent state-of-the-art methods.
... However, it is worth noting that this bodily state can be susceptible to manipulation or imitation (Schuller and Schuller, 2021). Facial expressions and their alterations are commonly utilized for emotion recognition; however, these expressions can be intentionally modified by individuals, posing challenges in accurately discerning their genuine emotions (Aryanmehr et al., 2018;Dzedzickis et al., 2020;Harouni et al., 2022). EEG (electroencephalography) is a technique employed to monitor brain activity through the measurement of voltage changes generated by the collective neural activity within the brain (San-Segundo et al., Dehghani et al., 2020Dehghani et al., , 2022Dehghani et al., , 2023Sadjadi et al., 2021;Mosayebi et al., 2022). ...
Article
Full-text available
Introduction Emotions play a critical role in human communication, exerting a significant influence on brain function and behavior. One effective method of observing and analyzing these emotions is through electroencephalography (EEG) signals. Although numerous studies have been dedicated to emotion recognition (ER) using EEG signals, achieving improved accuracy in recognition remains a challenging task. To address this challenge, this paper presents a deep-learning approach for ER using EEG signals. Background ER is a dynamic field of research with diverse practical applications in healthcare, human-computer interaction, and affective computing. In ER studies, EEG signals are frequently employed as they offer a non-invasive and cost-effective means of measuring brain activity. Nevertheless, accurately identifying emotions from EEG signals poses a significant challenge due to the intricate and non-linear nature of these signals. Methods The present study proposes a novel approach for ER that encompasses multiple stages, including feature extraction, feature selection (FS) employing clustering, and classification using Dual-LSTM. To conduct the experiments, the DEAP dataset was employed, wherein a clustering technique was applied to Hurst’s view and statistical features during the FS phase. Ultimately, Dual-LSTM was employed for accurate ER. Results The proposed method achieved a remarkable accuracy of 97.5% in accurately classifying emotions across four categories: arousal, valence, liking/disliking, dominance, and familiarity. This high level of accuracy serves as strong evidence for the effectiveness of the deep-learning approach to emotion recognition (ER) utilizing EEG signals. Conclusion The deep-learning approach proposed in this paper has shown promising results in emotion recognition using EEG signals. 
This method can be useful in various applications, such as developing more effective therapies for individuals with mood disorders or improving human-computer interaction by allowing machines to respond more intelligently to users’ emotional states. However, further research is needed to validate the proposed method on larger datasets and to investigate its applicability to real-world scenarios.
... Additionally, a comparative investigation of offthe-self textural descriptors and human ability in gender classification was undertaken. Moreover, Aryanmehr et al. (2018), with Computer Vision and Biometric Laboratory (CVBL) introduced image processing and biometric research using the IRIS gender identification database. They created an iris image database for gender classification and evaluated it using a new gender classification algorithm. ...
Article
Full-text available
Gender classification is attractive in a range of applications, including surveillance and monitoring, corporate profiling, and human-computer interaction. Individuals' identities may be gleaned from information about their gender, which is a kind of soft biometric. Over the years, several methods for determining a person's gender have been devised. Some of the most well-known ones are based on physical characteristics like face, fingerprint, palmprint, DNA, ears, gait, and iris. On the other hand, facial features account for the vast majority of gender classification methods. Also, the iris is a significant biometric trait, because the iris, according to research, remains basically constant during an individual's life. Besides that, the iris is externally visible and is non-invasive to the user, which is important for practical applications. Furthermore, there are already high-quality methods for segmenting and encoding iris images, and the current methods facilitate selecting and extracting attribute vectors from iris textures. This study discusses several approaches to determining gender. The previous works of literature are briefly reviewed. Additionally, there are a variety of methodologies for different steps of gender classification. This study provides researchers with knowledge and analysis of the existing gender classification approaches. Also, it will assist researchers who are interested in this specific area, as well as highlight the gaps and challenges in the field, and finally provide suggestions and future paths for improvement.
Article
Full-text available
In forensic investigations, characteristics such as gender, age, ethnic origin, and height are important in determining biological identity. In this study, we developed a deep learning-based decision support system for gender recognition from wrist radiographs using 13,935 images collected from individuals aged between 2 and 79 years. Differences in all regions of the images, such as carpal bones, radius, ulna bones, epiphysis, cortex, and medulla, were utilized. A hybrid model was proposed for gender determination from X-ray images, in which deep metrics were combined in appropriate layers of transfer learning methods. Although gender determination from X-ray images obtained from different countries has been reported in the literature, no such study has been conducted in Turkey. It was found that gender discrimination yielded different results for males and females. Gender identification was found to be more successful in females aged between 10 and 40 years than in males. However, for age ranges of 2-10 and 40-79 years, gender discrimination was found to be more successful in males. Finally, heat maps of the regions focused on by the proposed model were obtained from the images, and it was found that the areas of focus for gender discrimination were different between males and females.
Chapter
Full-text available
Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis remains challenging. Medical images have had a high impact on medicine, diagnosis, and treatment, and the most important part of image processing is image segmentation. This chapter presents a novel lung X-ray segmentation method using the U-net model. First, we construct a U-net that combines the lungs and their masks. Then, we convert the problem of classifying TB-positive and TB-negative lungs into lung segmentation, extracting the lungs by subtracting the chest from the radiograph. In experiments, the proposed model achieves 97.62% on the public datasets collected by Shenzhen Hospital, China, and the Montgomery County X-ray Set.
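Segmentation output of this kind is typically scored by the overlap between predicted and ground-truth masks. As a hedged aside (the chapter does not specify its metric here), a minimal Dice-coefficient sketch on toy binary lung masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = lung pixel)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted lung region vs. ground truth.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
score = dice_coefficient(pred, truth)   # 2*5 / (6+5) ≈ 0.909
```

A Dice score of 1.0 means the predicted and ground-truth lung regions coincide exactly.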
Chapter
Full-text available
Melanoma is one of the riskiest diseases; it extensively influences quality of life and can be dangerous or even fatal. Skin lesion classification methods face challenges in varying scenarios: available hand-crafted features fail to produce good results when the skin lesion images have low contrast or are under- or over-segmented, and they do not discriminate well between two significantly different densities. In this work, a pigmented-network feature vector and a deep feature vector are fused using a parallel fusion method to increase classification accuracy. The optimized fused feature vector is fed to machine learning classifiers that accurately classify dermoscopic images into two categories, benign and malignant melanoma. Statistical performance measures were used to assess the proposed fused feature vector on three skin lesion datasets (ISBI 2016, ISIC 2017, and PH2). The proposed fused feature vector classified skin lesions with the highest accuracy of 99.8% on ISBI 2016.
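The fusion step, combining a hand-crafted pigmented-network vector with a deep feature vector, is commonly realized by normalizing each vector and concatenating them. A minimal sketch under that assumption (the feature values and variable names below are hypothetical, not from the chapter):

```python
import numpy as np

def fuse_features(handcrafted, deep, eps=1e-12):
    """Min-max normalize each vector, then concatenate into one fused vector,
    so neither feature family dominates purely by scale."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return (v - v.min()) / (rng + eps)
    return np.concatenate([minmax(handcrafted), minmax(deep)])

pigment_vec = np.array([0.2, 5.0, 3.1])       # hypothetical pigment-network features
deep_vec = np.array([12.0, -4.0, 7.5, 0.0])   # hypothetical CNN activations
fused = fuse_features(pigment_vec, deep_vec)  # 3 + 4 = 7 fused features
```

The fused vector can then be handed to any downstream classifier, as the chapter does.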
Chapter
Brain dysfunction is very common in old age and even in middle-aged people. Alzheimer's and Parkinson's diseases are among the most common diseases caused by brain dysfunction. In Alzheimer's disease, a person gradually loses his mental abilities. Although it is normal for people to become a little forgetful as they get older, this memory disorder gradually progresses, posing great challenges; early detection is therefore very helpful in limiting its spread. Parkinson's is another disease whose prevalence will increase as life expectancy increases. Brain monitoring tools are used to detect these diseases early, and the electroencephalogram is an inexpensive, useful, and low-risk brain-signal recording tool. Machine-learning-based methods have shown their superiority for analyzing brain signals. A machine-learning pipeline for diagnosing Alzheimer's and Parkinson's comprises preprocessing, feature extraction, feature selection, classification, and evaluation. Since electroencephalogram data have high repetition and correlation across the channels recorded on the head, feature extraction techniques are of great importance. Feature selection methods seek to select the most effective features for classifying and identifying disease status. Finally, the selected features are classified using different classifiers. This chapter provides a complete overview of the stages of diagnosing these diseases with the help of machine learning. Keywords: Alzheimer; Parkinson; Electroencephalogram; Machine learning; Feature extraction and selection
Chapter
Today, the Internet has become an integral part of people's lives, so much so that some people cannot imagine life without it. With the spread and diversification of Internet use, a new paradigm called the Internet of Things (IoT) has emerged, in which information from people's daily lives is collected, managed, and communicated through the Internet. In this chapter, an improved Low-power and Lossy Network (LLN) method is proposed to control and monitor Alzheimer's patients with a cloud robot over the Internet of Things in smart homes. The proposed method adds load balancing to the RPL-based routing protocol used in LLN networks and improves the structure of the P2P (peer-to-peer) path. Data packets are sent over RPL both in order and out of order. Paths sent in P2P mode have been improved to reduce computational overhead and balance load on the network. Eliminating control messages and balancing load in routing are among the advantages of the proposed method. The proposed method, called ENRPL, is compared with baseline RPL in experiments performed in a test environment built in the NS2 simulator. The results show that ENRPL performs better in P2P communications, improves path construction and data transmission in P2P mode, and also benefits multipoint-to-point (MP2P) data transmission. The results further show that the proposed method is much more effective for sending data and creating load balance: based on the removal and reduction of control messages, ENRPL shows a significant improvement over conventional RPL. Keywords: IoMT; Health; Low-power and Lossy network; Alzheimer
Chapter
Health monitoring in humans is very important and can be performed from the embryonic period through adulthood. A healthy fetus leads to a healthy baby, so health assessment methods are applied from the fetal period to adulthood. One of the most common ways to assess health is to use clinical signs and data: measuring heart rate, blood pressure, temperature, and other symptoms can help monitor health. However, human predictions usually contain errors. Data mining, which identifies and diagnoses diseases, categorizes patients for disease management, and finds patterns that allow patients to be diagnosed more quickly and complications to be prevented, can be a great help. Researchers have shown that introducing data mining into medical analysis increases diagnostic accuracy, reduces costs, and reduces the human resources needed in the medical sector. In this paper, data mining methods are introduced to diagnose and monitor the health of individuals with various heart diseases. To make the study more comprehensive, heart disease is evaluated across fetal health diagnosis, arrhythmia detection, and angiography. For each disease, the relevant database is introduced and the corresponding health-monitoring methods are evaluated.
Conference Paper
Full-text available
This paper presents an experimental evaluation of Gabor filters and Zernike moments for extracting face features. The dimensionality of the input image is reduced to lessen the computational load of the Gabor filtering stage. 40 sub-images were obtained from each original image by applying Gabor filters at 5 scales and 8 orientations. From each sub-image, four Zernike features were extracted, giving 160 features in total. The k-Nearest Neighbor (k-NN) classifier is used for matching. The experiments were performed on the ORL database and the NC-Face database of facial expressions. The recognition rate is 98.5% on the ORL database and 89.23% on the NC-Face database of facial expressions. The performance of the proposed system was found to be satisfactory compared with existing systems.
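The pipeline above (a Gabor bank at 5 scales and 8 orientations, then per-sub-image features) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel parameters are arbitrary choices, filtering is done circularly via the FFT, and the mean/std of each response stand in for the paper's four Zernike features per sub-image, so this toy bank yields 80 rather than 160 features.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=5, orientations=8, size=9):
    """Filter img with a scales x orientations Gabor bank (circular
    convolution via FFT); keep mean and std of each response magnitude."""
    feats = []
    F = np.fft.fft2(img)
    for s in range(scales):
        for o in range(orientations):
            k = gabor_kernel(size, sigma=2.0 + s,
                             theta=np.pi * o / orientations, lam=4.0 + 2 * s)
            K = np.fft.fft2(k, s=img.shape)
            resp = np.abs(np.fft.ifft2(F * K))
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

rng = np.random.default_rng(0)
face = rng.random((32, 32))    # stand-in for a face image
vec = gabor_features(face)     # 5 * 8 filters -> 80 features here
```

The resulting vector would then be matched with k-NN as in the paper.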
Conference Paper
Full-text available
This paper is concerned with analyzing iris texture in order to determine "soft biometric" attributes of a person, rather than identity. In particular, it is concerned with predicting the gender of a person based on features of the iris texture. Previous researchers have explored various approaches for predicting gender from iris texture. We explore different implementations of Local Binary Patterns computed from the iris image using the masked information. Uniform LBP with concatenated histograms significantly improves the accuracy of gender prediction relative to using the whole iris image. Using a subject-disjoint test set, we achieve over 91% correct gender prediction using the texture of the iris. To our knowledge, this is the highest accuracy yet achieved for predicting gender from iris texture.
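A minimal sketch of the uniform LBP(8,1) descriptor with block-wise concatenated histograms that this approach builds on. The grid size and toy image are illustrative assumptions (in practice a library routine such as scikit-image's `local_binary_pattern` would be used, applied only to unmasked iris pixels):

```python
import numpy as np

def _uniform_map():
    """Map each 8-bit LBP code to a bin: 58 uniform patterns + 1 'other' bin."""
    table = np.zeros(256, dtype=int)
    nxt = 0
    for code in range(256):
        bits = [(code >> k) & 1 for k in range(8)]
        transitions = sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
        if transitions <= 2:
            table[code] = nxt
            nxt += 1
        else:
            table[code] = 58   # all non-uniform codes share the last bin
    return table

TABLE = _uniform_map()

def lbp_codes(img):
    """8-neighbor LBP code for every interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=int)
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= center).astype(int) << k
    return code

def uniform_lbp_hist(img):
    """Normalized 59-bin uniform LBP histogram of one image block."""
    hist = np.bincount(TABLE[lbp_codes(img)].ravel(), minlength=59).astype(float)
    return hist / hist.sum()

def concatenated_hist(img, grid=2):
    """Split the image into grid x grid blocks, concatenate block histograms."""
    h, w = img.shape
    parts = [uniform_lbp_hist(img[i * h // grid:(i + 1) * h // grid,
                                  j * w // grid:(j + 1) * w // grid])
             for i in range(grid) for j in range(grid)]
    return np.concatenate(parts)

rng = np.random.default_rng(1)
iris = rng.integers(0, 256, size=(32, 64))   # stand-in for a normalized iris
feat = concatenated_hist(iris)               # 4 blocks x 59 bins = 236 features
```

Concatenating per-block histograms preserves coarse spatial layout, which is what the paper reports improves accuracy over one whole-image histogram.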
Conference Paper
Fingerprint recognition is a type of physiological biometrics. In this paper, a system is proposed for fingerprint matching. The proposed system contains three phases. In the first phase, preprocessing, background removal and contrast enhancement are performed. In the second phase, a series of features is extracted from fingerprint patches using pseudo-Zernike moments. Finally, in the third phase, the recognition phase, fingerprint matching is done using the Euclidean distance between the input samples and the stored templates. The proposed system is invariant to the size, translation, and rotation of fingerprints, and is accurate and fast. It has been evaluated on two data sets, FVC 2004 and FVC 2006, and is observed to be more accurate than similar methods.
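The three-phase matcher ends with Euclidean-distance matching against stored templates; a minimal sketch of that final phase (the feature values and names below are hypothetical, not from the paper, and real templates would be pseudo-Zernike feature vectors):

```python
import numpy as np

def match_fingerprint(query, templates, labels):
    """Return the label of the stored template closest to the query
    feature vector under Euclidean distance, plus that distance."""
    dists = np.linalg.norm(templates - query, axis=1)
    best = int(np.argmin(dists))
    return labels[best], float(dists[best])

# Hypothetical 4-D feature vectors for three enrolled fingerprints.
templates = np.array([[0.1, 0.9, 0.3, 0.5],
                      [0.8, 0.2, 0.7, 0.1],
                      [0.4, 0.4, 0.4, 0.4]])
labels = ["alice", "bob", "carol"]
query = np.array([0.12, 0.88, 0.31, 0.49])
who, dist = match_fingerprint(query, templates, labels)
```

In a deployed system the minimum distance would also be compared against an acceptance threshold to reject imposters.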
Article
Previous researchers have explored various approaches for predicting the gender of a person based on the features of the iris texture. This paper is the first to predict gender directly from the same binary iris code that could be used for recognition. We found that the information for gender prediction is distributed across the iris, rather than localized in particular concentric bands. We also found that using selected features representing a subset of the iris region achieves better accuracy than using features representing the whole iris region. We used the measures of mutual information to guide the selection of bits from the iris code to use as features in gender prediction. Using this approach, with a person-disjoint training and testing evaluation, we were able to achieve 89% correct gender prediction using the fusion of the best features of iris code from the left and right eyes.
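The bit-selection idea, ranking iris-code bit positions by mutual information with the gender label, can be sketched as follows. The toy codes and the planted informative bit are fabricated for illustration; real iris codes have thousands of bits:

```python
import numpy as np

def mutual_information(bit, label):
    """Mutual information (in bits) between one binary feature and a binary label."""
    mi = 0.0
    for b in (0, 1):
        for g in (0, 1):
            p_joint = np.mean((bit == b) & (label == g))
            p_b, p_g = np.mean(bit == b), np.mean(label == g)
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (p_b * p_g))
    return mi

def select_bits(codes, labels, k):
    """Rank iris-code bit positions by MI with gender; keep the top k."""
    scores = np.array([mutual_information(codes[:, j], labels)
                       for j in range(codes.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=200)        # 0 = male, 1 = female (toy)
codes = rng.integers(0, 2, size=(200, 16))   # 16-bit toy iris codes
codes[:, 3] = labels                         # plant: bit 3 encodes gender
top = select_bits(codes, labels, k=4)        # bit 3 ranks first
```

Only the selected bit positions would then be fed to the gender classifier, mirroring the paper's finding that a subset of the iris region outperforms the whole.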
Conference Paper
These days, biometric authentication systems based on human characteristics such as face, fingerprint, voice, and iris are becoming popular among researchers. These systems identify an individual as authentic or an imposter using a database of enrolled individuals, but they do not provide other information about an imposter, such as gender or ethnicity. Various researchers have utilized facial images for gender classification. Iris images have also been used for identification, but very few references report identifying human attributes such as gender from iris images. In this paper, gender is identified using iris images. Statistical features and wavelet-based texture features are extracted from iris images, and a classification model based on a Support Vector Machine (SVM) is developed to classify gender, achieving an accuracy of 83.06%.
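The final stage, an SVM over extracted features, might look like the following sketch using scikit-learn's `SVC`. The statistical descriptors are simple stand-ins for the paper's statistical + wavelet feature set (the wavelet part would need e.g. PyWavelets), and the data is synthetic:

```python
import numpy as np
from sklearn.svm import SVC

def stat_features(img):
    """Simple statistical descriptors of one iris image (a stand-in for
    the paper's statistical + wavelet features)."""
    return np.array([img.mean(), img.std(), np.median(img),
                     img.min(), img.max()])

rng = np.random.default_rng(3)
# Synthetic toy data: one class brighter on average than the other.
male = [rng.normal(100, 10, (16, 16)) for _ in range(30)]
female = [rng.normal(140, 10, (16, 16)) for _ in range(30)]
X = np.array([stat_features(im) for im in male + female])
y = np.array([0] * 30 + [1] * 30)            # 0 = male, 1 = female

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)
```

On real iris data one would of course report accuracy on a held-out, subject-disjoint test set rather than on the training set.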
Conference Paper
Previous researchers have reported success in predicting ethnicity and in predicting gender from features of the iris texture. This paper is the first to consider both problems using similar experimental approaches. Contributions of this work include greater accuracy than previous work on predicting ethnicity from iris texture, empirical evidence that suggests that gender prediction is harder than ethnicity prediction, and empirical evidence that ethnicity prediction is more difficult for females than for males.
Conference Paper
Having thoroughly researched existing palm print identification technology, in this paper we propose a hierarchical multi-feature scheme that facilitates coarse-to-fine matching for efficient and effective palm print recognition. First, we define two levels of features: a geometry feature based on distance (level-1 feature) and a texture feature based on Zernike moments (level-2 feature). We then adopt a different kind of neural network for each feature type and combine the two into one recognition system. Finally, experimental results demonstrate the feasibility and efficiency of the proposed system.
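The coarse-to-fine idea, pruning candidates with cheap level-1 geometry distances before ranking survivors by level-2 texture features, can be sketched as below. The paper uses Zernike-moment features and neural networks for the fine stage; this sketch substitutes plain Euclidean distances, and all feature values and names are made up:

```python
import numpy as np

def hierarchical_match(query_geom, query_tex, db, geom_tol=2.0):
    """Coarse-to-fine palmprint matching: level-1 geometry distances prune
    the database, level-2 texture distances rank the survivors."""
    # Coarse stage: keep only candidates whose geometry is within geom_tol.
    survivors = [e for e in db
                 if np.linalg.norm(e["geom"] - query_geom) <= geom_tol]
    if not survivors:
        return None
    # Fine stage: pick the survivor with the closest texture vector.
    best = min(survivors, key=lambda e: np.linalg.norm(e["tex"] - query_tex))
    return best["name"]

db = [
    {"name": "p1", "geom": np.array([10.0, 5.0]), "tex": np.array([0.2, 0.8])},
    {"name": "p2", "geom": np.array([10.5, 5.2]), "tex": np.array([0.9, 0.1])},
    {"name": "p3", "geom": np.array([30.0, 9.0]), "tex": np.array([0.2, 0.8])},
]
who = hierarchical_match(np.array([10.1, 5.1]), np.array([0.85, 0.15]), db)
```

The coarse stage makes matching efficient (most of the database is rejected by a cheap test), while the fine stage preserves accuracy among the remaining candidates.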
Article
Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
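The masked fractional Hamming distance at the core of this matching scheme, which is also what bit masking modifies, can be sketched directly; the codes and masks below are toy values:

```python
import numpy as np

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two iris codes, counting only
    bit positions both masks mark as valid (Daugman-style matching)."""
    valid = mask_a & mask_b
    n_valid = valid.sum()
    if n_valid == 0:
        return 1.0                        # no comparable bits: worst case
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / n_valid

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 1, 1, 0, 0, 0, 1, 1])
ma = np.array([1, 1, 1, 1, 1, 1, 0, 1])   # 0 marks occluded/unreliable bits
mb = np.array([1, 1, 0, 1, 1, 1, 1, 1])
hd = fractional_hamming(a, b, ma, mb)     # 3 disagreements over 6 valid bits
```

Masking inconsistent bits, as the article proposes for responses near the complex-plane axes, simply means setting additional mask positions to 0 before this computation.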
Conference Paper
This paper employs machine learning techniques to develop models that predict gender based on the iris texture features. While there is a large body of research that explores biometrics as a means of verifying identity, there has been very little work done to determine if biometric measures can be used to determine specific human attributes. If it is possible to discover such attributes, they would be useful in situations where a biometric system fails to identify an individual that has not been enrolled, yet still needs to be identified. The iris was selected as the biometric to analyze for two major reasons: (1) quality methods have already been developed to segment and encode an iris image, (2) current iris encoding methods are conducive to selecting and extracting attributes from an iris texture and creating a meaningful feature vector.