Content uploaded by Tamer Shanableh
Author content
All content in this area was uploaded by Tamer Shanableh on Nov 07, 2016
Content may be subject to copyright.
Gait Recognition System Tailored for Arab Costume of the Gulf Region

Tamer Shanableh
Department of Computer Science and Engineering
American University of Sharjah
tshanableh@aus.edu

Khaled Assaleh
Department of Electrical Engineering
American University of Sharjah
kassaleh@aus.edu

Layla Al-Hajjaj
Computer Science Program
American University of Sharjah
g00021907@aus.edu

AbdulWahab Kabani
Computer Science Program
American University of Sharjah
b00020950@aus.edu
Abstract –
Existing work on gait recognition is focused on casual (Western)
costumes and is hence not suitable for the Gulf region, where long gowns
are worn by both men and women. This paper proposes a gait recognition
solution that is suitable for both Gulf and casual costumes. The
solution is based on computing an adaptive image prediction between
consecutive images. The resultant predictions are then accumulated into one
image and transformed using either the Discrete Cosine Transform
(DCT) or the Radon transform. The gait feature vectors are
computed from such transformed images, followed by feature modeling based on
polynomial networks. The proposed solution is tested on a dataset
of around 100 participants of mixed genders and mixed costumes. The
proposed system yields impressive classification rates approaching 100%
accuracy.
Keywords –
Human identification; computer vision; motion analysis;
gait biometric
1. INTRODUCTION
In biometrics, people are identified based on their
characteristics such as voice, iris, fingerprint, hand geometry
and face. It has been reported that such identification can also
be based on the way that a human walks [1]. Such a biometric is
referred to as Gait. Basically video cameras are used to acquire
video sequences of individuals and recognize them based on the
way they walk. Gait recognition has a number of attractive
characteristics when compared to existing biometrics [2]. For
example it does not require a physical contact as required by
fingerprint or hand recognition. It also does not require high
image resolution or special image acquisition conditions as
required by face recognition for instance. Lastly it is non-
intrusive and can recognize people at a distance without their
knowledge or direct involvement.
In 2005, a research group from the University of South Florida
issued a human gait recognition challenge [3]. The group
compiled a dataset of video sequences with different covariates
such as camera viewing angle, walking surface type, carrying
conditions (where a person can be carrying a briefcase, for
example), shoe type (walking in heels, for instance, will
affect the gait) and video capturing time. For the latter, most
video sequences were acquired in a second round six months
after the first shooting. The dataset contains data for
experiments of increasing difficulty levels. A total of 1,870
video sequences were acquired from a total of 122 participants.
The dataset is made available and is used for benchmarking
new solutions in gait recognition.
However, this dataset and subsequently all the previously
proposed solutions are based on the Western style of dress.
Such solutions are likely to fail when applied to the Gulf style of
dress, which includes white/black robes, headgear/scarves and veils.
This is because the feature extraction
methods exploit the gait cycle, which depends on the
movements of the legs.
This paper proposes an efficient feature extraction and
classification scheme for gait recognition for the Arab costume
of the Gulf region. The proposed scheme is shown to work
for Western costumes as well.
In general, gait recognition based on video sequences is divided
into a number of steps:
1. Segmentation: this step entails identifying the pixel locations
belonging to the subject to be identified. The segmented images
are binarized, resulting in what is known as "silhouette frames".
One approach to this segmentation is through background
modeling and separation. For instance, in [3] it was proposed to
extract bounding boxes of the subjects and then compute the
mean vector and covariance matrix of the background pixels. The
pixels of the bounding box containing the subject are then
classified into either foreground or background using
Mahalanobis distances from the background model. The
distances are classified into foreground or background
based on their likelihoods, which are estimated using an
Expectation Maximization (EM) procedure.
Variants of this segmentation algorithm are also reported in the
literature. For instance, in [5] the pixels’ Mahalanobis distances
from the background model were thresholded into either
foreground or background without the need for computing
likelihoods through the EM algorithm. Other approaches
include extracting principal components of silhouette boundary
vector variations [6] or Fourier descriptors [7].
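As a concrete illustration, the Mahalanobis-distance segmentation of [3] and its thresholded variant [5] can be sketched as follows. This is a minimal sketch under our own assumptions (a single Gaussian colour model shared across the frame, a hand-picked distance threshold); the function names are ours, not from the cited works.

```python
import numpy as np

def background_model(bg_frames):
    """Fit a Gaussian colour model (mean vector and covariance matrix)
    to frames known to contain only background, as in [3]."""
    pixels = np.stack(bg_frames).reshape(-1, 3)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    return mean, np.linalg.inv(cov)

def segment_foreground(frame, bg_mean, bg_cov_inv, threshold=5.0):
    """Label a pixel as foreground when its Mahalanobis distance from
    the background model exceeds a threshold -- the simplification of
    [5], which skips the EM likelihood estimation of [3]."""
    diff = frame - bg_mean                       # per-pixel colour deviation
    # Mahalanobis distance: sqrt(d^T * Sigma^-1 * d) at every pixel
    d2 = np.einsum('hwi,ij,hwj->hw', diff, bg_cov_inv, diff)
    return np.sqrt(d2) > threshold               # binary silhouette mask
```

The resulting binary mask is the "silhouette frame" used by the methods surveyed above.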
2. Feature extraction: This step can be preceded by what is
known as gait cycle estimation, where a cycle is the set of images
starting from the right heel touching the ground until it
touches the ground again. This information can be
used to segment the sequences into cycles and then align them
using various techniques such as Population Hidden Markov
Models (PHMMs) [4]. Features are then based on averaged
subsequences [8]. Other feature extraction techniques are
reported for characterizing gait dynamics such as stride, stride
speed, length and cadence [9]. Others used static body
information such as ratios of various body parts [9]. Feature
vectors composed of the amplitude of the spectrum of key
silhouette frames are also reported in [2]. More recently
dynamic features from averaged silhouette cycles are extracted
by Gabor-based discriminative common vectors (DCV) analysis
[10]. Likewise [11] proposed the use of Kernel-based Principal
Component Analysis (KPCA) to extract gait features. In other
approaches, the human body components are studied separately
and feature vectors are extracted accordingly [12].
3. Feature modeling and similarity measures: Here the extracted
features are compared against the stored entries in the dataset.
Reported measures include Euclidean distances
in the Linear Discriminant Analysis (LDA) space [4, 13],
symmetric group-theoretic distances [14] and the normalized
Euclidean distance between the projection centroids of two gait
sequences [15]. The dynamics of gait sequences can also be modeled by
hidden Markov models (HMMs) as reported in [16].
The existing work on gait recognition is, however, based on
identifying people in Western or casual costumes, namely pants
and shorts. Such solutions are not suitable for identifying
individuals in the Gulf region of the Middle East. The local
dress code in the Gulf region for males includes robes
and headgear. Likewise, the dress code for females includes
robes and head scarves or face veils. Examples of such costumes
are shown in Figure 1. It is clear that the gap between the legs
of the individuals is concealed; hence all the techniques based
on gait cycles do not apply. Note that this problem can also be
present if the individuals are dressed in long skirts;
hence the problem is not specific to the Gulf costumes. We
propose a solution that applies to both casual and Gulf costumes
based on accumulating the adaptive prediction errors of
consecutive images, as shall be explained in Section 3.
The rest of this paper is organized as follows. The compiled
dataset and data acquisition procedure are described in Section 2.
Feature extraction and motion representation for casual and
Gulf costumes are presented in Section 3. The classification
problem is then formalized using polynomial networks in
Section 4. Experimental results are presented and discussed in
Section 5 prior to the conclusion in Section 6.
2. DATASET DESCRIPTION
Although the purpose of this research is to devise a method for
gait recognition for individuals in Gulf costumes nonetheless,
we need to verify that the proposed solution is also applicable
to recognizing individuals in casual costumes (mainly with pants
or shorts). As such the same system can be deployed for
identifying individuals with mixed costumes.
Similar to the setup reported in [3] the camera was positioned
10 meters away from the walking subjects. However one digital
camera was used with one view only. The video capturing took
place in the rotunda of one of our lecture building using one
digital camera. An example of participants with different
costumes is shown in Figure 1.
(a)
(b)
(c)
(d)
Fig. 1. Example participants with different costumes. (a) Female
with Gulf costume (b) Male with Gulf costume (c) Female with
casual costume (d) Male with casual costume.
A total of 103 subjects participated in the data
collection. All participants are undergraduate students of the
same age group, between 18 and 22 years old. Out of the 103
subjects, 53 participated in Gulf costumes (33 females and 20
males). Another 50 subjects participated in casual costumes (11
females and 39 males).
Each participant was asked to walk naturally across the rotunda
back and forth a total of 8 times. Of these, 4 instances were
captured walking from right to left and 4 in the other
direction.
3. FEATURE EXTRACTION
The existing literature on gait recognition heavily depends on
the extraction of gait cycles. Such a cycle can be defined as the
sequence of images from the point at which the right heel of an
individual touches the ground until it touches the ground again.
The extraction of gait cycles depends on observing the gap
between the legs as they move from a certain position back to the
same position. Unfortunately, with the Gulf costume, the gap
between the legs is not at all apparent; hence a different
approach to feature extraction shall be sought.
We propose to extract the motion of an individual and
accumulate it into one or two images. Feature vectors that
describe the motion can then be extracted from such images.
Note that in the dataset description it was mentioned that the
subjects walk in front of a static background that contains
various stationary objects; hence the preprocessing of subject
extraction and segmentation is not needed in this case. In the
absence of a stationary background, on the other hand, a
preprocessing step shall entail identifying the pixel locations
belonging to the subject. The segmented images are usually
binarized, resulting in what is known as "silhouette frames".
One approach for this segmentation is through background
modeling and separation as described above [3].
In this paper we base our feature extraction on the techniques
used in digital video coding, where we compute the forward
prediction error between successive images. That is, each image
is subtracted from its immediately previous image. The resultant
prediction error can be thresholded to filter out unimportant
differences that did not result from the motion of the
individual. The threshold can be set to the 50th or the 75th
percentile of the non-zero pixels of the prediction error image.
The thresholded prediction error images can then be accumulated
into one image, which we refer to as the Accumulated
Predictions (AP) image. For a better representation of the
individual's motion, the prediction error between two
consecutive images can be represented using two images: one for
the positive differences and the other for the negative
differences. Each prediction error image is then thresholded
separately. In this case we end up with two AP images, which
can be referred to as the positive AP and negative AP images.
Note that the covered and uncovered background will appear to
move relative to the individual and will thus be represented in
the AP images. To minimize the appearance of such relative
motion we propose to use either the previous image or the future
image in computing the prediction error for a given image. The
prediction in this case is referred to as forward or backward
prediction respectively. The decision between forward and
backward prediction can be based on computing the Sum of
Absolute Differences (SAD) of the prediction errors: the
prediction source that minimizes the SAD is selected. The
result of implementing this technique is shown in Figure 2. The
figure shows that the appearance of the background objects is
now minimized as desired.
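As a sketch of the accumulation scheme described in this section, the following is our own minimal implementation (the percentile threshold and the per-frame SAD decision follow the text; the grayscale frame layout and a moving bright blob standing in for a walking subject are illustrative assumptions):

```python
import numpy as np

def ap_images(frames, percentile=75):
    """Accumulate thresholded prediction errors into positive and
    negative AP images. For each frame, the forward (previous-frame)
    or backward (next-frame) prediction is chosen by minimum sum of
    absolute differences (SAD)."""
    pos_ap = np.zeros(frames[0].shape)
    neg_ap = np.zeros(frames[0].shape)
    for t in range(1, len(frames) - 1):
        cur = frames[t].astype(float)
        fwd = cur - frames[t - 1]          # forward prediction error
        bwd = cur - frames[t + 1]          # backward prediction error
        # keep the prediction source that minimizes the SAD
        err = fwd if np.abs(fwd).sum() <= np.abs(bwd).sum() else bwd
        for signed, ap in ((np.maximum(err, 0), pos_ap),
                           (np.maximum(-err, 0), neg_ap)):
            nz = signed[signed > 0]
            if nz.size:
                thr = np.percentile(nz, percentile)  # 50th/75th percentile
                ap += (signed >= thr).astype(float)  # accumulate binary map
    return pos_ap, neg_ap
```

The two returned arrays correspond to the positive and negative AP images, which the next step transforms into feature vectors.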
Fig. 2. AP images of a motion sequence with adaptive
forward/backward prediction. (a) Negative AP image (b) Positive
AP image.
Once the AP images are computed, the next step is to extract
spatial domain features. Following the authors' work on sign
language recognition reported in [17], these features can be
based on either the two-dimensional Discrete Cosine Transform
(DCT) coefficients or the Radon transform coefficients.
An important property of the DCT is its energy compaction:
most of the image energy is concentrated in the top left corner
of the transformed image. This fact is utilized in image and
video coding, where such low-frequency coefficients are
quantized with a finer quantization step size in comparison with
the high-frequency content. Therefore, in our AP images the DCT
coefficients can be selected in a zig-zag scanning manner,
starting from the top left corner and progressing inwards
towards the bottom right corner. The scanning process can select
a predefined number of coefficients; this number is known as the
DCT cutoff, which can be selected empirically. The process of
DCT transformation followed by zig-zag scanning is also known as
zonal coding. Note that the zonal coding is applied to both the
negative and positive AP images. The resultant vectors of the
zonal coding are then interleaved to generate the final feature
vector.
As mentioned previously, the second approach to feature
extraction is based on the Radon transformation. Essentially,
the AP images are projected at a given angle; the result is a
one-dimensional curve that reflects the integral of pixel lines
across the direction of the projection angle. Typically the
projection is done on either the horizontal or the vertical
image axis. To smooth and reduce the size of the computed image
projection, a one-dimensional DCT can be used followed by ideal
low-pass filtering with a given frequency cutoff. Hence the
projection can be represented using a few low-frequency
coefficients.
4. CLASSIFICATION
A polynomial network provides a parameterized nonlinear map
which nonlinearly expands a sequence of feature vectors to a
higher dimensionality and maps them to a target output
sequence. Training of a polynomial network consists of two
main stages. The first stage involves expanding the training
feature vectors via polynomial expansion with the aim of
improving the separation of the different classes in the
expanded feature vector space. The second stage entails
computing the weights of the polynomial network applied to
the expanded feature vectors. Polynomial networks have been
used successfully in biomedical signal separation [18].
In a polynomial network setting, the gait recognition problem
can be formulated as follows. The response variables, which
represent the M individuals (where each individual can be
referred to as a class in this case) of the training dataset, are
denoted by M vectors $q_m$, i.e. $Q = \{q_m \mid m =
1, 2, \dots, M\}$. For a given class of feature vectors, say class i, the
corresponding vector $q_i$ will contain binary values, with '1's
indicating individuals belonging to class i and '0's for the rest
of the individuals or participants.
The feature vector at repetition j of individual (class) m
is composed of l feature variables and is denoted by
$x_{m,j} = [x_{m,j}(0)\ x_{m,j}(1) \dots x_{m,j}(l)]$. Consequently, the feature
vectors in the training set are denoted by the matrix X where:

$$X = \begin{bmatrix}
x_{1,1}(0) & x_{1,1}(1) & \cdots & x_{1,1}(l) \\
x_{1,2}(0) & x_{1,2}(1) & \cdots & x_{1,2}(l) \\
\vdots & \vdots & & \vdots \\
x_{M,J}(0) & x_{M,J}(1) & \cdots & x_{M,J}(l)
\end{bmatrix} \qquad (1)$$
We wish to perform a nonlinear mapping between the feature
vector matrix X and the response variables $Q = \{q_m \mid m =
1, 2, \dots, M\}$. In polynomial networks, the dimensionality of the
feature vectors in matrix X is first expanded into an rth order.
The dimensionality expansion can be achieved by a reduced
multivariate polynomial expansion as proposed in [16]. The
expansion of X into the rth order is denoted by the matrix $P \in
\mathbb{R}^{n \times k}$, where n is the total number of training vectors and
k is the dimensionality of the expanded feature vector, which is
defined as [16]:

$$k = 1 + r + l(2r - 1) \qquad (2)$$

The mapping between P and $q_m$ is then achieved by using a least-
squared error objective criterion:

$$w_m = \arg\min_{w} \left\| q_m - Pw \right\|^2 \qquad (3)$$

where $\|\cdot\|$ denotes the $l_2$ norm. Minimizing the objective
function results in:

$$w_m = (P^T P)^{-1} P^T q_m \qquad (4)$$
Note that the model weights are computed using a non-iterative
least-squares method, which is a clear advantage when it comes
to computational complexity.
Consequently, the training process results in a set of weights
$\{w_m \mid m = 1, 2, \dots, M\}$. To classify a feature vector
representing the walk of an individual, we compute the inner
product of its expanded feature vector with each of the weight
vectors. This results in a score sequence $s_m$, $m = 1, 2, \dots, M$. The
class label of the feature vector is then determined by
$\arg\max_m s_m$.
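The training and classification steps above can be sketched as follows. This is our own minimal implementation: the expansion uses the reduced multivariate polynomial form whose length matches Eq. (2), and the small ridge term is our addition (not in the paper) to guard against the ill-conditioning discussed later in the experiments.

```python
import numpy as np

def rm_expand(x, r=2):
    """Reduced multivariate polynomial expansion of a feature vector x
    of length l; the expanded length is 1 + r + l(2r - 1), per Eq. (2)."""
    s = x.sum()
    terms = [np.ones(1)]
    terms += [np.array([s ** j]) for j in range(1, r + 1)]  # (sum x)^j
    terms += [x ** j for j in range(1, r + 1)]              # element-wise powers
    terms += [x * s ** (j - 1) for j in range(2, r + 1)]    # cross terms
    return np.concatenate(terms)

def train_polynomial_network(X, labels, r=2, ridge=1e-6):
    """Compute per-class least-squares weights, i.e. Eq. (4), with a
    small ridge term added for numerical stability (our addition)."""
    P = np.stack([rm_expand(x, r) for x in X])
    classes = sorted(set(labels))
    Q = np.array([[1.0 if y == c else 0.0 for c in classes] for y in labels])
    W = np.linalg.solve(P.T @ P + ridge * np.eye(P.shape[1]), P.T @ Q)
    return W, classes

def classify(x, W, classes, r=2):
    """Score the expanded vector against every class and take the arg-max."""
    scores = rm_expand(x, r) @ W
    return classes[int(np.argmax(scores))]
```

Training is a single linear solve, which reflects the non-iterative advantage noted above.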
5. EXPERIMENTAL RESULTS
In the following results we validate the proposed feature
extraction schemes on Gulf costumes and compare the results
against those obtained on casual costumes, which are similar to
what is reported in the literature. Common to all of the results
to follow, we report the gait classification rate obtained from
training and testing the system with feature vectors of different
lengths, according to the DCT cutoff explained in Section 3
above. Unless otherwise stated, the classification results are
obtained using a least-squares classifier without polynomial
expansion.
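For reference, the zonal-coding step that produces these variable-length feature vectors can be sketched as follows (our own sketch: the zig-zag order is the standard image-coding one, and `dct2` is a direct, unoptimized 2-D DCT-II written here to keep the example self-contained):

```python
import numpy as np

def dct2(img):
    """Direct (unnormalized) 2-D DCT-II, adequate for small AP images."""
    def dct_mat(n):
        k = np.arange(n)[:, None]
        t = np.arange(n)[None, :]
        return np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    return dct_mat(img.shape[0]) @ img @ dct_mat(img.shape[1]).T

def zigzag_order(h, w):
    """Zig-zag scan order from the top-left corner, as in image coding:
    traverse anti-diagonals, alternating direction on each one."""
    return sorted(((i, j) for i in range(h) for j in range(w)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zonal_features(ap_image, cutoff):
    """DCT followed by zig-zag scanning, truncated at the DCT cutoff."""
    coeffs = dct2(ap_image)
    order = zigzag_order(*coeffs.shape)[:cutoff]
    return np.array([coeffs[i, j] for i, j in order])
```

Varying `cutoff` here corresponds to varying the feature vector length reported on the horizontal axes of the figures.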
We start the experimental results section by comparing
different approaches to spatial domain feature extraction. In
Figure 3, the female and male datasets are mixed; this also
includes mixing different directions of walking (i.e. from left to
right and vice versa). For each individual, 75% of the walking
samples are used for training and 25% for testing. Hence the
testing data is unseen by the training model. The figure shows
that the Radon transformation with horizontal projections of the
AP images yields the highest classification rates. Intuitively
this makes sense because the horizontal projection represents
the shape of the accumulated motion from both the front and
the rear of the body of an individual. At a DCT cutoff of 60
coefficients, the classification results are very close to 100%.
This should not come as a surprise, as similar results have been
reported in [3]. On the other hand, the figure shows that the Radon
transformation with vertical projection of the AP images results
in very poor classification. This is because such
projections can only describe the height of the individual and
the sinusoidal-like motion of the head during the walk. Clearly
such features are not enough for identifying an individual.
Interleaving the feature vectors of both aforementioned
projections, though, gives an acceptable result as shown in the
figure. Feature extraction using zonal coding resulted in
moderate classification rates. This can be justified by the fact
that the AP images contain plenty of high frequencies; hence
describing such images whilst discarding most of the high-
frequency content through zonal coding does not result in
accurate and precise feature vectors. Lastly, it is worth
mentioning that the above discussion applies equally to both
Gulf and casual costumes. However, in the latter scenario, the
classification scores resulting from the horizontal projections
are a bit more accurate. This is not a surprise, as the Gulf
costume conceals some details of the body motion.
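The horizontal-projection feature discussed here can be sketched as follows. This is a minimal sketch under our own assumptions: the horizontal projection is taken as row sums of the AP image (a Radon projection at a fixed angle), then smoothed by a 1-D DCT-II with an ideal low-pass cutoff as described in Section 3; the helper names are ours.

```python
import numpy as np

def projection_feature(ap_image, cutoff, axis='horizontal'):
    """Project an AP image onto one axis, then keep the first `cutoff`
    1-D DCT-II coefficients, i.e. ideal low-pass filtering in the DCT
    domain."""
    proj = ap_image.sum(axis=1 if axis == 'horizontal' else 0)
    n = proj.size
    k = np.arange(cutoff)[:, None]
    t = np.arange(n)[None, :]
    basis = np.cos(np.pi * k * (2 * t + 1) / (2 * n))  # DCT-II basis rows
    return basis @ proj

def interleaved_feature(ap_image, cutoff):
    """Interleave horizontal and vertical projection features, as in
    the 'interleaved projection' variant of Figure 3."""
    h = projection_feature(ap_image, cutoff, 'horizontal')
    v = projection_feature(ap_image, cutoff, 'vertical')
    out = np.empty(2 * cutoff)
    out[0::2], out[1::2] = h, v
    return out
```

Because the vertical projection alone carries little identity information, interleaving it with the horizontal projection is what recovers an acceptable rate in the figure.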
(a)
[Plot: classification rate (0 to 1) vs. length of feature vector (10 to 60) for the horizontal projection, vertical projection, interleaved projection and zonal coding approaches]
(b)
Fig. 3. Comparison between various spatial feature extraction
approaches. (a) Gulf costume (b) Casual costume.
In Figure 4 we interleave the two directions of walking in terms
of system training and testing. That is, we train the system on
one direction of walking and test it on the other direction.
Clearly this experiment is carried out in a cross-validation
manner and the average classification result is reported. In this
experiment 50% of the feature vectors belong to each direction
of walking; hence the training-to-testing ratio is set as such. The
spatial feature extraction approach is the Radon transformation
with horizontal projections. Clearly, classification based
on a different direction of walking is less accurate, as
shown in the figure; nonetheless, at a DCT cutoff of 60 a
classification rate of around 90% is achieved. The figure also
shows that with training and testing based on a second-order
reduced-model polynomial expansion, higher classification
rates are obtained. At low DCT cutoffs the enhancement is
quite evident. However, at higher dimensionality and due to the
low number of training samples per individual (4 in this
experiment), the expanded feature vector matrix
becomes ill-conditioned, thus affecting the matrix inversion
in the computation of the model weights. Again, the
same discussion applies to both the Gulf and the casual
costumes.
(a)
(b)
[Plots: classification rate (0.4 to 1) vs. length of feature vector (10 to 60); curves: training dataset based on mixed directions of walking, training dataset based on a different direction of walking, and with 2nd-order reduced model expansion]
Fig. 4. Classification rates using different training approaches
based on the direction of walking. (a) Gulf costume (b) Casual
costume.
6. CONCLUSION
This paper proposed a solution for gait recognition using non-
western costumes. In particular, the work was considered with
Gulf customs for both genders. The proposed solution was also
tested on casual costumes and was shown to work as well. As
such the same system can be deployed for identifying
individuals with mixed customs without the need for a
customized solution for a particular custom. The paper
proposed to accumulate the prediction errors of consecutive
video images using an adaptive forward/backward prediction
scheme. This was needed to counter effect the relative motion
of the background objects. Once the motion is accumulated
into one or two images, spatial feature extraction was applied. It
was shown that the Radon transformation with horizontal
image projections result in precise and concise feature vectors
that are linearly separable. The experimental results revealed
that the proposed solution results in accurate classification rates
and works equally well for both of the aforementioned
costumes.
REFERENCES
[1] S.V. Stevenage, M.S. Nixon, and K. Vince, “Visual analysis
of gait as a cue to identity,” Applied Cognitive Psychology, vol.
13, pp. 513-526, Dec. 1999.
[2] G. Zhao, R. Chen, G. Liu, and H. Li, “Amplitude spectrum-
based gait recognition,” Proc. Int’l Conf. Automatic Face and
Gesture Recognition, pp. 23-28, 2004.
[3] S. Sarkar, P. Jonathon Phillips, Z. Liu, I. Robledo, P.
Grother, K. W. Bowyer, "The humanID gait challenge problem:
data sets, performance, and analysis," IEEE Transactions on
Pattern Analysis and Machine Intelligence, 27(2), pp. 162-177,
Feb. 2005.
[4] Z. Liu and S. Sarkar, "Improved gait recognition by gait
dynamics normalization," IEEE Transactions on Pattern
Analysis and Machine Intelligence, 28(6), June 2006.
[5] P.J. Phillips, S. Sarkar, I. Robledo, P. Grother, and K.
Bowyer, “The gait identification challenge problem: data sets
and baseline algorithm,” Proc. Int’l Conf. Pattern Recognition,
pp. 385-388, 2002.
[6] L. Wang, T. Tan, H. Ning, and W. Hu, “Silhouette analysis-
based gait recognition for human identification,” IEEE Trans.
Pattern Analysis and Machine Intelligence, 25(12), pp. 1505-
1518, Dec. 2003.
[7] S.D. Mowbry and M.S. Nixon, “Automatic Gait Recognition
via Fourier Descriptors of Deformable Objects,” Proc. Conf.
Audio Visual Biometric Person Authentication, pp. 566-573,
2003.
[8] Z. Liu and S. Sarkar, “Simplest representation yet for gait
recognition: averaged silhouette,” Proc. Int’l Conf. Pattern
Recognition, vol. 4, pp. 211-214, 2004.
[9] A. Johnson and A. Bobick, “A multi-view method for gait
recognition using static body parameters,” Proc. Int’l Conf.
Audio and Video-Based Biometric Person Authentication, pp.
301-311, 2001.
[10] X. Yang, Y. Zhou, T. Zhang, G. Shu, J. Yang, “Gait
recognition based on dynamic region analysis,” Signal
Processing, 88(9), pp. 2350-2356, September 2008.
[11] J. Wu, J. Wang, L. Liu, "Feature extraction via KPCA for
classification of gait patterns," Human Movement Science,
26(3), pp. 393-411, June 2007.
[12] N. Boulgouris, Z. Chi, "Human gait recognition based on
matching of body components," Pattern Recognition, 40(6),
pp. 1763-1770, June 2007.
[13] J. Han and B. Bhanu, “Statistical feature fusion for gait-
based human recognition,” Proc. IEEE Conf. Computer Vision
and Pattern Recognition, vol. 2, pp. 842-847, June 2004.
[14] Y. Liu, R. Collins, and Y. Tsin, “Gait sequence analysis
using frieze patterns,” Proc. European Conf. Computer Vision,
pp. 657-671, May 2002.
[15] L. Wang, T. Tan, H. Ning, and W. Hu, “Silhouette analysis-
based gait recognition for human identification,” IEEE
Transactions on pattern analysis and machine intelligence,
25(12), December 2003.
[16] M.-H. Cheng, M.-F. Ho, C.-L. Huang, “Gait analysis for
human identification through manifold learning and HMM,”
Pattern Recognition, 41(8), pp. 2541-2553, August 2008.
[17] T. Shanableh and K. Assaleh, “Telescopic vector
composition and polar accumulated motion residuals for
feature extraction in Arabic Sign Language recognition,”
EURASIP Journal on Image and Video Processing, vol. 2007,
Article ID 87929, 10 pages, 2007. doi:10.1155/2007/87929.
[18] K. Assaleh, and H. Al-Nashash, “A Novel Technique for
the Extraction of Fetal ECG Using Polynomial Networks,”
IEEE Transactions on Biomedical Engineering, 52(6), pp. 1148
– 1152, June 2005.