Communications in Statistics - Theory and Methods
ISSN: 0361-0926 (Print), 1532-415X (Online)

To cite this article: S. Tahmasebi & S. Daneshi (2018) Measures of inaccuracy in record values, Communications in Statistics - Theory and Methods, 47:24, 6002-6018, DOI: 10.1080/03610926.2017.1404102

To link to this article: https://doi.org/10.1080/03610926.2017.1404102

Published online: 05 Dec 2017.
Measures of inaccuracy in record values
S. Tahmasebi (a) and S. Daneshi (b)
(a) Department of Statistics, Persian Gulf University, Bushehr, Iran; (b) Department of Statistics, Shahrood University of Technology, Iran
ARTICLE HISTORY
Received January; Accepted November

KEYWORDS
Cumulative residual inaccuracy; measure of inaccuracy; record values
ABSTRACT
In this paper, we consider a measure of inaccuracy between the distributions of the nth record value and the parent random variable. We also propose the measure of residual inaccuracy of record values and study characterization results for the dynamic cumulative residual inaccuracy measure. We discuss some properties of the proposed measures.
1. Introduction
Suppose that $X$ and $Y$ are two non-negative random variables with distribution functions $F(x)$, $G(x)$ and reliability functions $\bar F(x)$, $\bar G(x)$, respectively. If $f(x)$ is the actual probability density function (pdf) corresponding to the observations and $g(x)$ is the density assigned by the experimenter, then the inaccuracy measure of $X$ and $Y$ is defined by Kerridge (1961) as
$$
I(X,Y)=I(f,g)=-\int_0^{+\infty} f(x)\log g(x)\,dx. \tag{1.1}
$$
Taneja, Kumar, and Srivastava (2009) defined a dynamic measure of inaccuracy associated with two residual lifetime distributions $F$ and $G$, corresponding to the Kerridge measure of inaccuracy, given by
$$
I(X,Y;t)=-\int_t^{+\infty} \frac{f(x)}{\bar F(t)}\log\frac{g(x)}{\bar G(t)}\,dx. \tag{1.2}
$$
Kumar and Taneja (2015) defined the cumulative residual inaccuracy based on $\bar F(x)$, $\bar G(x)$ as
$$
I(\bar F,\bar G)=-\int_0^{+\infty} \bar F(x)\log \bar G(x)\,dx. \tag{1.3}
$$
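As a quick illustration of (1.1) and (1.3), the following Python sketch (not part of the original paper) evaluates both measures by numerical quadrature. The two exponential densities and the closed-form values used for comparison are our own assumptions for checking purposes.

```python
# A minimal numerical sketch of the inaccuracy measures (1.1) and (1.3),
# illustrated with two exponential densities (an assumed example).
import numpy as np
from scipy import integrate

theta_f, theta_g = 1.0, 2.0          # rates of the "true" density f and the assigned density g

f    = lambda x: theta_f * np.exp(-theta_f * x)
g    = lambda x: theta_g * np.exp(-theta_g * x)
Fbar = lambda x: np.exp(-theta_f * x)
Gbar = lambda x: np.exp(-theta_g * x)

# Kerridge inaccuracy (1.1): I(f, g) = -int f(x) log g(x) dx
I_fg, _ = integrate.quad(lambda x: -f(x) * np.log(g(x)), 0, np.inf)

# Cumulative residual inaccuracy (1.3): I(Fbar, Gbar) = -int Fbar(x) log Gbar(x) dx
I_FG, _ = integrate.quad(lambda x: -Fbar(x) * np.log(Gbar(x)), 0, np.inf)

# Elementary integration gives -log(theta_g) + theta_g/theta_f and theta_g/theta_f**2
print(I_fg, -np.log(theta_g) + theta_g / theta_f)   # both ~ 1.3069
print(I_FG, theta_g / theta_f**2)                    # both ~ 2.0
```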
Now, let $\{X_m, m\ge 1\}$ be a sequence of independent and identically distributed random variables with cumulative distribution function (cdf) $F$ and pdf $f$. An observation $X_j$ is called an upper record value if its value exceeds all previous observations; thus, $X_j$ is an upper record value if $X_j>X_i$ for every $i<j$. Let $R_n$ denote the $n$th upper record value arising from $\{X_m, m\ge 1\}$. Then the density function and survival function of $R_n$, for all integers $n\ge 1$,
which are denoted by $f_{R_n}$ and $\bar F_{R_n}$, respectively, are given by
$$
f_{R_n}(x)=\frac{[-\log \bar F(x)]^{n-1}}{(n-1)!}\,f(x),\qquad x>0, \tag{1.4}
$$
$$
\bar F_{R_n}(x)=\sum_{j=0}^{n-1}\frac{[-\log \bar F(x)]^{j}}{j!}\,\bar F(x)=\frac{\Gamma(n;-\log \bar F(x))}{\Gamma(n)}, \tag{1.5}
$$
where (a;x)is the incomplete gamma function which is dened as
(a;x)=+∞
x
ua1eudu
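The series form of (1.5) can be checked numerically against the regularized upper incomplete gamma function. The sketch below is ours (the standard exponential parent is an assumed example) and simply compares the two forms of (1.5) at a single point.

```python
# A small sketch of the record-value density (1.4) and survival function (1.5),
# here for a standard exponential parent.
import numpy as np
from scipy.special import factorial, gammaincc

def record_pdf(x, n, f, Fbar):
    u = -np.log(Fbar(x))
    return u**(n - 1) / factorial(n - 1) * f(x)          # eq. (1.4)

def record_sf(x, n, Fbar):
    u = -np.log(Fbar(x))
    js = np.arange(n)
    return np.sum(u**js / factorial(js)) * Fbar(x)        # eq. (1.5), series form

f    = lambda x: np.exp(-x)        # Exp(1) parent
Fbar = lambda x: np.exp(-x)

x, n = 2.5, 4
# gammaincc is the regularized upper incomplete gamma, i.e. Gamma(n; u)/Gamma(n)
print(record_sf(x, n, Fbar), gammaincc(n, -np.log(Fbar(x))))  # the two forms of (1.5) agree
```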
Record values arise in problems such as industrial stress testing, meteorological analysis, hydrology, sports, and economics. In reliability theory, record values are used to study, for example, technical systems that are subject to shocks, e.g., peaks of voltages. For more details about records and their applications, one may refer to Arnold, Balakrishnan, and Nagaraja (1992). Several authors have worked on measures of inaccuracy for ordered random variables. Thapliyal and Taneja (2013) proposed a measure of inaccuracy between the ith order statistic and the parent random variable. Recently, Thapliyal and Taneja (2015) introduced the measure of residual inaccuracy of order statistics and proved a characterization result for it. Motivated by some of the articles mentioned above, in this paper we investigate some applications of the previously mentioned measures of inaccuracy for upper record values. We also propose dynamic measures of inaccuracy in record values and study their characterization results. Specifically, within the scope of this paper we propose to study the cumulative residual inaccuracy between the upper record value and the parent random variable of a random sample. The paper is organized as follows. In Section 2, we consider a measure of inaccuracy associated with $f_{R_n}$ and $f$ and derive useful expressions for it under some lifetime distributions. In Section 3, we propose the measure of residual inaccuracy for $R_n$ and study its characterization results. In Section 4, we obtain some interesting properties of the cumulative residual inaccuracy associated with $\bar F_{R_n}$ and $\bar F$. Throughout this paper, the terms 'increasing' and 'decreasing' are used in the non-strict sense.
2. A measure of inaccuracy
A measure of inaccuracy associated with the distribution of the $n$th record value $R_n$ and the parent density $f(x)$ is defined as
$$
\begin{aligned}
I(f_{R_n},f)&=-\int_0^{+\infty} f_{R_n}(x)\log f(x)\,dx\\
&=-\int_0^{+\infty} \frac{[-\log \bar F(x)]^{n-1}}{(n-1)!}\,f(x)\log f(x)\,dx\\
&=-\int_0^{+\infty} \frac{w_n^{\,n-1}}{(n-1)!}\,e^{-w_n}\log\big(f(F^{-1}(1-e^{-w_n}))\big)\,dw_n\\
&=-E\big[\log\big(f(F^{-1}(1-e^{-W_n}))\big)\big],
\end{aligned}
$$
where $W_n$ has a Gamma distribution with parameters $n$ and $1$ (refer to Baratpour, Ahmadi, and Arghami 2007a). Baratpour, Ahmadi, and Arghami (2007a) obtained upper and lower
bounds for $I(f_{R_n},f)$, which depend on the hazard rate function $\lambda_F(y)=\frac{f(y)}{1-F(y)}$, as follows:
$$
I(f_{R_n},f)\le -B_n\int_{A}\lambda_F(y)\log f(y)\,dy \tag{2.1}
$$
and
$$
I(f_{R_n},f)\ge -B_n\int_{A^c}\lambda_F(y)\log f(y)\,dy, \tag{2.2}
$$
where $B_n=\frac{(n-1)^{n-1}}{(n-1)!}e^{-(n-1)}$, $A=\{y\,|\,f(y)\le 1\}$ and $A^c$ is the complement of $A$. In the following examples we obtain useful expressions of $I(f_{R_n},f)$ for some lifetime distributions.
Example 2.1. Let $X$ be a random variable having the exponential distribution with mean $\frac{1}{\theta}$, $\theta>0$; then $I(f_{R_n},f)=n-\log\theta$. Note that for a fixed value of $n$, $I(f_{R_n},f)$ is a decreasing function of $\theta$. Similarly, for a fixed value of $\theta$, the inaccuracy increases in $n$.
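A small Monte Carlo sketch (ours, not the authors') of the representation $I(f_{R_n},f)=-E[\log f(F^{-1}(1-e^{-W_n}))]$ for the exponential case of Example 2.1; the estimate should sit close to $n-\log\theta$.

```python
# Hedged Monte Carlo check of Example 2.1 via the Gamma representation of records.
import numpy as np

rng = np.random.default_rng(0)
n, theta = 5, 2.0

w = rng.gamma(shape=n, scale=1.0, size=200_000)   # W_n ~ Gamma(n, 1)
x = w / theta                                      # F^{-1}(1 - e^{-w}) = w/theta for Exp(theta)
log_f = np.log(theta) - theta * x                  # log f(x) for the exponential density

print(-log_f.mean(), n - np.log(theta))            # both ~ 4.307
```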
Example 2.2. Let $X$ be a random variable having the Pareto distribution with pdf
$$
f(x)=\theta x^{-(\theta+1)},\qquad x>1,\ \theta>0;
$$
then
$$
I(f_{R_n},f)=-\log\theta+\frac{\theta+1}{\theta}\,n.
$$
Note that for a fixed value of $n$, $I(f_{R_n},f)$ is a decreasing function of $\theta$. Similarly, for a fixed value of $\theta$, the inaccuracy increases in $n$.
Example 2.3. Let $X$ be a random variable having the Weibull distribution with pdf
$$
f(x)=\lambda\beta x^{\beta-1}\exp(-\lambda x^{\beta})\,I_{(0,\infty)}(x),\qquad \lambda>0,\ \beta>0;
$$
then
$$
I(f_{R_n},f)=-\log\beta-\frac{\log\lambda}{\beta}-\frac{\beta-1}{\beta}\,\psi(n)+n,
$$
where $\psi(n)=\frac{d}{dn}\ln\Gamma(n)$ is the digamma function. For a fixed value of $n\ge 2$, if $\lambda>1$, then $I(f_{R_n},f)$ is a decreasing function of $\beta$. Also, for a fixed value of $\beta>0$, $I(f_{R_n},f)$ is a decreasing function of $\lambda$. Figure 1 shows the function $I(f_{R_n},f)$ for $n=5$ and $\lambda=2$: the inaccuracy decreases for different values of $\beta>0$. Similarly, Figure 2 shows the function $I(f_{R_n},f)$ for $n=5$ and $\beta=2$; it also decreases with increase in $\lambda>0$.
Example 2.4. Let $X\sim N(0,1)$; then
$$
I(f_{R_n},f)=\frac{1}{2}\log(2\pi)+\int_0^{+\infty}\frac{x^{2}e^{-x^{2}/2}\,[-\log(1-\Phi(x))]^{n-1}}{2\,\Gamma(n)\sqrt{2\pi}}\,dx,
$$
where $\Phi(x)$ is the cdf of the standard normal distribution. Note that $I(f_{R_n},f)$ is a decreasing function of $n$. Figure 3 shows the decrease in inaccuracy for different values of $n>3$.
Example 2.5. Let $X$ be a random variable with pdf
$$
f(x)=\frac{\alpha\lambda e^{\lambda x}}{[e^{\lambda x}-(1-\alpha)]^{2}}\,I_{(0,\infty)}(x),\qquad \alpha>0,\ \lambda>0;
$$
then
$$
I(f_{R_n},f)=n-\log\lambda+\sum_{i=1}^{\infty}\frac{\left(\frac{\alpha-1}{\alpha}\right)^{i}}{i\,(i+1)^{n}}.
$$

Figure 1. Plot of $I(f_{R_n},f)$ for $n=5$ and $\lambda=2$.
Figure 2. Plot of $I(f_{R_n},f)$ for $n=5$ and $\beta=2$.
Figure 3. Plot of $I(f_{R_n},f)$ for $n>3$.
Figure 4. Plot of $I(f_{R_n},f)$ when $f(x)=3(1-x)^{2}$, $0<x<1$.

In Examples 2.1 and 2.2, $f(x)$ is decreasing in $x$ and $I(f_{R_n},f)$ is increasing in $n$. But this is not true in general: for example, $f(x)=3(1-x)^{2}$, $0<x<1$, is decreasing in $x$ while $I(f_{R_n},f)$ is decreasing in $n$ (see Figure 4).
Now we can prove an important property of the inaccuracy measure using some properties of stochastic ordering. For that we present the following definitions:
1. The random variable $X$ is said to be smaller than $Y$ in the usual stochastic order (denoted by $X\le_{st}Y$) if $P(X>x)\le P(Y>x)$ for all $x$. It is known that $X\le_{st}Y$ if and only if $E(\phi(X))\le E(\phi(Y))$ for all increasing functions $\phi$ (see Shaked and Shanthikumar 2007).
2. The random variable $X$ is said to be smaller than $Y$ in the likelihood ratio order (denoted by $X\le_{lr}Y$) if $\frac{g(x)}{f(x)}$ is increasing in $x$.
3. We say that $X$ is smaller than $Y$ in the hazard rate order, denoted by $X\le_{hr}Y$, if $\frac{\bar G(x)}{\bar F(x)}$ is increasing with respect to $x$.
4. A random variable $X$ is said to be smaller than a random variable $Y$ in the increasing convex order, denoted by $X\le_{icx}Y$, if $E(\phi(X))\le E(\phi(Y))$ for all increasing convex functions $\phi$ such that the expectations exist.
5. A non-negative random variable $X$ is said to have increasing (decreasing) failure rate, IFR (DFR), if $\lambda_F(x)=\frac{f(x)}{\bar F(x)}$ is increasing (decreasing) in $x$.
6. A non-negative random variable $X$ with cdf $F$ is said to have increasing (decreasing) failure rate average, IFRA (DFRA), if $\frac{-\log\bar F(x)}{x}$ is increasing (decreasing) in $x>0$. Note that the IFR and DFR classes of distributions are contained in the IFRA and DFRA classes of distributions, respectively.
Theorem 2.1. If $f(x)$ is increasing in $x$, then $I(f_{R_n},f)$ is decreasing in $n$.

Proof. Since the ratio $\frac{f_{W_{n+1}}(x)}{f_{W_n}(x)}=\frac{x}{n}$ is an increasing function of $x$, it follows from Shaked and Shanthikumar (2007) that $W_n\le_{lr}W_{n+1}$, which implies $W_n\le_{st}W_{n+1}$. Also, it is given that $f(F^{-1}(1-e^{-x}))$ is increasing in $x$, and thus
$$
-E\big[\log\big(f(F^{-1}(1-e^{-W_{n+1}}))\big)\big]\le -E\big[\log\big(f(F^{-1}(1-e^{-W_n}))\big)\big].
$$
This completes the proof.
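A brief numerical illustration of Theorem 2.1 (our sketch; the density $f(x)=2x$ on $(0,1)$ is an assumed example, not one from the paper): since $f$ is increasing, the estimated inaccuracies should decrease in $n$.

```python
# Monte Carlo illustration of Theorem 2.1 with the increasing density f(x) = 2x on (0,1).
import numpy as np

rng = np.random.default_rng(2)

def inaccuracy(n, size=200_000):
    w = rng.gamma(shape=n, scale=1.0, size=size)     # W_n ~ Gamma(n, 1)
    x = np.sqrt(1.0 - np.exp(-w))                    # F^{-1}(1 - e^{-w}) for F(x) = x^2
    return -np.mean(np.log(2.0 * x))                 # I(f_{R_n}, f) = -E[log f(.)]

print([round(inaccuracy(n), 4) for n in range(1, 6)])  # a decreasing sequence
```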
3. Measure of residual inaccuracy for $R_n$
We propose the dynamic residual measure of inaccuracy between $f_{R_n}$ and $f$ as follows:
$$
\begin{aligned}
I(f_{R_n},f;t)&=-\int_t^{+\infty}\frac{f_{R_n}(x)}{\bar F_{R_n}(t)}\log\frac{f(x)}{\bar F(t)}\,dx\\
&=\log\bar F(t)-\frac{1}{\bar F_{R_n}(t)}\int_t^{+\infty}f_{R_n}(x)\log f(x)\,dx.
\end{aligned}\tag{3.1}
$$
Note that $\lim_{t\to 0}I(f_{R_n},f;t)=I(f_{R_n},f)$. Since $\log\bar F(t)\le 0$ for $t\ge 0$, we have
$$
I(f_{R_n},f;t)\le -\frac{1}{\bar F_{R_n}(t)}\int_t^{+\infty}f_{R_n}(x)\log f(x)\,dx
\le -\frac{1}{\bar F_{R_n}(t)}\int_0^{+\infty}f_{R_n}(x)\log f(x)\,dx=\frac{I(f_{R_n},f)}{\bar F_{R_n}(t)}.
$$
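A minimal quadrature sketch of (3.1), assuming an exponential parent (our choice for illustration); as $t\to 0$ it should recover $I(f_{R_n},f)=n-\log\theta$ from Example 2.1.

```python
# Hedged numerical sketch of the residual inaccuracy (3.1) for an Exp(theta) parent.
import numpy as np
from scipy import integrate
from scipy.special import factorial, gammaincc

theta, n = 2.0, 3

f         = lambda x: theta * np.exp(-theta * x)
Fbar      = lambda x: np.exp(-theta * x)
record_f  = lambda x: (theta * x)**(n - 1) / factorial(n - 1) * f(x)   # eq. (1.4), -log Fbar = theta*x
record_sf = lambda t: gammaincc(n, theta * t)                          # eq. (1.5)

def residual_inaccuracy(t):
    tail, _ = integrate.quad(lambda x: record_f(x) * np.log(f(x)), t, np.inf)
    return np.log(Fbar(t)) - tail / record_sf(t)                       # eq. (3.1)

print(residual_inaccuracy(1e-8), n - np.log(theta))   # both ~ 2.307
print(residual_inaccuracy(0.5))                        # residual inaccuracy at t = 0.5
```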
Proposition 3.1. Let $M=f(m)<\infty$, where $m$ is the mode of the distribution. Then
$$
I(f_{R_n},f;t)\ge\log\frac{\bar F(t)}{M}.
$$
Proof. Let $m$ be the mode of the distribution; hence $\log f(x)\le\log M$. Using this fact, the proof is complete.
Baratpour, Ahmadi, and Arghami (2007b) studied characterizations based on the Shannon entropy of order statistics and record values. Thapliyal and Taneja (2015) focused on characterization results based on the dynamic residual inaccuracy of order statistics and proved that it uniquely determines the distribution function. Consider the problem of finding sufficient conditions for the uniqueness of the solution of the initial value problem (IVP)
$$
\frac{dy}{dx}=f(x,y),\qquad y(x_0)=y_0,
$$
where $f$ is a given function of two variables whose domain is a region $D\subset\mathbb{R}^2$, $(x_0,y_0)$ is a specified point in $D$, and $y$ is the unknown function. By a solution of the IVP on an interval $I\subset\mathbb{R}$, we mean a function $\phi$ such that (i) $\phi$ is differentiable on $I$, (ii) the graph of $\phi$ lies in $D$, (iii) $\phi(x_0)=y_0$, and (iv) $\phi'(x)=f(x,\phi(x))$ for all $x\in I$. The following lemmas, together with other results, will help in proving our characterization result.

Lemma 3.2. Suppose that the function $f$ is continuous in a domain $D\subset\mathbb{R}^2$, and let $f$ satisfy a Lipschitz condition (with respect to $y$) in $D$, namely
$$
|f(x,y_1)-f(x,y_2)|\le k|y_1-y_2|,\qquad k>0, \tag{3.2}
$$
for every pair of points $(x,y_1)$ and $(x,y_2)$ in $D$. Then the solution $y=\phi(x)$ of the initial value problem $y'=f(x,y)$, $\phi(x_0)=y_0$, $x\in I$, is unique.

Proof. See Gupta and Kirmani (2008).
For any function $f(x,y)$ of two variables defined on $D\subset\mathbb{R}^2$, we now present a sufficient condition which guarantees that the Lipschitz condition is satisfied in $D$.

Lemma 3.3. Suppose that the function $f$ is continuous in a convex region $D\subset\mathbb{R}^2$. Suppose further that $\frac{\partial f}{\partial y}$ exists and is continuous in $D$. Then the function $f$ satisfies the Lipschitz condition in $D$.

Proof. See Gupta and Kirmani (2008).
Theorem 3.4. Let $X$ be a non-negative continuous random variable with survival function $\bar F$. Let the dynamic residual inaccuracy of the $n$th record value, denoted by $I(f_{R_n},f;t)$, be finite for $t\ge 0$. Then $I(f_{R_n},f;t)$ uniquely determines the survival function $\bar F$ of the random variable $X$.

Proof. From (3.1), we have
$$
I(f_{R_n},f;t)=\log\bar F(t)-\frac{1}{\bar F_{R_n}(t)}\int_t^{+\infty}f_{R_n}(x)\log f(x)\,dx. \tag{3.3}
$$
Differentiating both sides of (3.3) with respect to $t$, we obtain
$$
\begin{aligned}
\frac{d}{dt}\big[I(f_{R_n},f;t)\big]
&=-\lambda_F(t)+\lambda_{F_{R_n}}(t)\big[I(f_{R_n},f;t)+\log\lambda_F(t)\big]\\
&=-\lambda_F(t)+c(t)\lambda_F(t)\big[I(f_{R_n},f;t)+\log\lambda_F(t)\big]\\
&=\lambda_F(t)\Big(-1+c(t)\big[I(f_{R_n},f;t)+\log\lambda_F(t)\big]\Big),
\end{aligned}\tag{3.4}
$$
where $\lambda_{F_{R_n}}(t)$ is the hazard (failure) rate of $R_n$ and
$$
c(t)=\frac{[-\log\bar F(t)]^{n-1}/(n-1)!}{\sum_{j=0}^{n-1}[-\log\bar F(t)]^{j}/j!}.
$$
Taking the derivative with respect to $t$ again, we get
$$
\lambda_F'(t)=\frac{\lambda_F(t)\Big[c(t)\,I''(f_{R_n},f;t)-\big(c'(t)\lambda_F(t)+c'(t)\,I'(f_{R_n},f;t)+c^{2}(t)\lambda_F(t)\,I'(f_{R_n},f;t)\big)\Big]}{c(t)\big[c(t)\lambda_F(t)+I'(f_{R_n},f;t)\big]}. \tag{3.5}
$$
Suppose that there are two survival functions $\bar F$ and $\bar F^{*}$ such that
$$
I(f_{R_n},f;t)=I(f^{*}_{R_n},f^{*};t)=h(t).
$$
Then, for all $t\ge 0$, from (3.5) we obtain
$$
\lambda_F'(t)=\psi(t,\lambda_F(t)),\qquad \lambda_{F^{*}}'(t)=\psi(t,\lambda_{F^{*}}(t)),
$$
where
$$
\psi(t,y)=\frac{y\big[c(t)h''(t)-\big(c'(t)\,y+c'(t)h'(t)+c^{2}(t)\,h'(t)\,y\big)\big]}{c(t)\big[c(t)\,y+h'(t)\big]}.
$$
Using Lemmas 3.2 and 3.3, we have $\lambda_F(t)=\lambda_{F^{*}}(t)$ for all $t$. By noting that the hazard rate function uniquely characterizes the distribution function, we complete the proof.
Analogous to the measure (3.1), a dynamic past measure of inaccuracy between $f_{R_n}$ and $f$ is given by
$$
\begin{aligned}
\tilde I(f_{R_n},f;t)&=-\int_0^{t}\frac{f_{R_n}(x)}{F_{R_n}(t)}\log\frac{f(x)}{F(t)}\,dx\\
&=\log F(t)-\frac{1}{F_{R_n}(t)}\int_0^{t}f_{R_n}(x)\log f(x)\,dx,
\end{aligned}
$$
where $\lim_{t\to\infty}\tilde I(f_{R_n},f;t)=\tilde I(f_{R_n},f)$. Note that
$$
\tilde I(f_{R_n},f;t)\le\frac{I(f_{R_n},f)}{F_{R_n}(t)},
$$
and, similarly, $\tilde I(f_{R_n},f;t)\ge\log\frac{F(t)}{M}$.
Theorem 3.5. Let $X$ be a non-negative continuous random variable with distribution function $F$. Let the dynamic past inaccuracy of the $n$th record value, denoted by $\tilde I(f_{R_n},f;t)$, be finite for $t\ge 0$. Then $\tilde I(f_{R_n},f;t)$ uniquely determines the distribution function $F$.

Proof. The proof is similar to that of Theorem 3.4.
4. Cumulative residual inaccuracy for $R_n$
We propose the cumulative residual measure of inaccuracy between $\bar F_{R_n}$ and $\bar F$ as follows:
$$
\begin{aligned}
I(\bar F_{R_n},\bar F)&=-\int_0^{+\infty}\bar F_{R_n}(x)\log\bar F(x)\,dx\\
&=-\int_0^{+\infty}\sum_{j=0}^{n-1}\frac{[-\log\bar F(x)]^{j}}{j!}\,\bar F(x)\log\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\times\frac{f(x)}{\frac{f(x)}{\bar F(x)}}\,dx\\
&=\sum_{j=0}^{n-1}(j+1)\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{(j+1)!}\times\frac{f(x)}{\lambda_F(x)}\,dx\\
&=\sum_{j=0}^{n-1}(j+1)\,E_{R_{j+2}}\!\left[\frac{1}{\lambda_F(X)}\right],
\end{aligned}\tag{4.1}
$$
where $R_{j+2}$ is a random variable with density function $f_{R_{j+2}}(x)=\frac{[-\log\bar F(x)]^{j+1}}{(j+1)!}\,f(x)$ and $\lambda_F(x)$ is the hazard rate function of $X$. In the following example, we calculate $I(\bar F_{R_n},\bar F)$ for some specific lifetime distributions which are widely used in reliability theory and life testing.
Example 4.1.
(a) If $X$ is uniformly distributed on $[0,\theta]$, then it is easy to see that $I(\bar F_{R_n},\bar F)=\theta\sum_{j=0}^{n-1}\frac{j+1}{2^{j+2}}$, for all integers $n\ge 1$.
(b) If $X$ has a Weibull distribution with survival function $\bar F(x)=e^{-\alpha x^{\beta}}$, $x\ge 0$, $\alpha,\beta>0$, then for all integers $n\ge 1$ we obtain $I(\bar F_{R_n},\bar F)=\frac{1}{\beta}\,\alpha^{-\frac{1}{\beta}}\sum_{j=0}^{n-1}\frac{(\frac{1}{\beta}+j)!}{j!}$.
(c) If $X$ has a Pareto distribution with survival function $\bar F(x)=\left(\frac{\lambda}{x+\lambda}\right)^{\gamma}$, $x\ge 0$, $\gamma>1$, $\lambda>0$, then $I(\bar F_{R_n},\bar F)=\frac{\lambda}{\gamma-1}\sum_{j=0}^{n-1}(j+1)\left(\frac{\gamma}{\gamma-1}\right)^{j+1}$, for all integers $n\ge 1$.
(d) Let $X$ be exponentially distributed with mean $\frac{1}{\lambda}$, $\lambda>0$; then we obtain $I(\bar F_{R_n},\bar F)=\frac{n(n+1)}{2\lambda}$, which is an increasing function of $n$.
(e) Let $X$ be a non-negative random variable which has an Exponential-Inverse Gaussian distribution with survival function $\bar F(x)=e^{\frac{\alpha}{\beta}\left[1-\sqrt{1+2\beta x}\right]}$, $x\ge 0$, $\alpha,\beta>0$; then for all integers $n\ge 1$ we obtain $I(\bar F_{R_n},\bar F)=\frac{1}{\alpha^{2}}\sum_{j=0}^{n-1}(j+1)\big[\alpha+(j+2)\beta\big]$.
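A quick numerical check of part (d) above (a sketch we add for illustration, not from the paper), evaluating the defining integral of (4.1) for an exponential parent and comparing with $\frac{n(n+1)}{2\lambda}$.

```python
# Hedged quadrature check of Example 4.1(d) for an Exp(lambda) parent.
import numpy as np
from scipy import integrate
from scipy.special import gammaincc

lam, n = 2.0, 4
Fbar      = lambda x: np.exp(-lam * x)
record_sf = lambda x: gammaincc(n, lam * x)            # eq. (1.5) for the exponential parent

I_n, _ = integrate.quad(lambda x: -record_sf(x) * np.log(Fbar(x)), 0, np.inf)
print(I_n, n * (n + 1) / (2 * lam))                    # both ~ 5.0
```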
Proposition 4.1. Let $X$ be an absolutely continuous non-negative random variable with survival function $\bar F$. Then we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}(j+1)\,[\mu_{j+2}-\mu_{j+1}],
$$
where $\mu_n=\int_0^{+\infty}\bar F_{R_n}(x)\,dx$.
Proof. From (1.5) and (4.1) we have
$$
\begin{aligned}
I(\bar F_{R_n},\bar F)&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}(j+1)\int_0^{+\infty}\big[\bar F_{R_{j+2}}(x)-\bar F_{R_{j+1}}(x)\big]\,dx\\
&=\sum_{j=0}^{n-1}(j+1)\,[\mu_{j+2}-\mu_{j+1}].
\end{aligned}
$$
Proposition 4.2. Let $a,b>0$. For $n=1,2,\dots$ it holds that
$$
I(\bar F_{aR_n+b},\bar F_{aX+b})=a\,I(\bar F_{R_n},\bar F).
$$
Proof. From (4.1), and noting that $\bar F_{aX+b}(x)=\bar F\!\left(\frac{x-b}{a}\right)$, we have
$$
I(\bar F_{aR_n+b},\bar F_{aX+b})=\int_0^{+\infty}\sum_{j=0}^{n-1}\frac{[-\log\bar F_{aX+b}(x)]^{j+1}}{j!}\,\bar F_{aX+b}(x)\,dx=a\,I(\bar F_{R_n},\bar F). \tag{4.2}
$$
Proposition 4.3. Let $X$ be an absolutely continuous non-negative random variable with survival function $\bar F$. Then we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}\lambda_F(z)\int_z^{\infty}[-\log\bar F(x)]^{j}\,\bar F(x)\,dx\,dz.
$$
Proof. By (4.1) and the fact that $-\log\bar F(x)=\int_0^{x}\lambda_F(z)\,dz$, we have
$$
\begin{aligned}
I(\bar F_{R_n},\bar F)&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\left(\int_0^{x}\lambda_F(z)\,dz\right)\frac{[-\log\bar F(x)]^{j}}{j!}\,\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{+\infty}\lambda_F(z)\int_z^{+\infty}[-\log\bar F(x)]^{j}\,\bar F(x)\,dx\,dz.
\end{aligned}
$$
Proposition 4.4. Let $X$ be a non-negative random variable with survival function $\bar F$. Then we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}\frac{1}{j!}E\big[X(-\log\bar F(X))^{j+1}\big]-\sum_{j=0}^{n-1}\frac{j+1}{j!}E\big[X(-\log\bar F(X))^{j}\big].
$$
Proof. From (4.1) and using Fubini's theorem, we obtain
$$
\begin{aligned}
I(\bar F_{R_n},\bar F)&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\int_x^{+\infty}f(z)\,dz\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{f(z)}{j!}\int_0^{z}[-\log\bar F(x)]^{j+1}\,dx\,dz.
\end{aligned}
$$
The result follows easily by integrating by parts.
Proposition 4.5. Let $X$ be an absolutely continuous non-negative random variable with $I(\bar F_{R_n},\bar F)<\infty$ for all integers $n\ge 1$. Then we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}\frac{1}{j!}E\big[h_{j+1}(X)\big], \tag{4.3}
$$
where
$$
h_{j+1}(x)=\int_0^{x}[-\log\bar F(z)]^{j+1}\,dz,\qquad x\ge 0.
$$
Proof. From (4.1) and using Fubini's theorem, we obtain
$$
\begin{aligned}
I(\bar F_{R_n},\bar F)&=\sum_{j=0}^{n-1}\int_0^{\infty}\frac{[-\log\bar F(z)]^{j+1}}{j!}\,\bar F(z)\,dz\\
&=\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}\left(\int_z^{\infty}f(x)\,dx\right)[-\log\bar F(z)]^{j+1}\,dz\\
&=\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}f(x)\int_0^{x}[-\log\bar F(z)]^{j+1}\,dz\,dx
=\sum_{j=0}^{n-1}\frac{1}{j!}E\big[h_{j+1}(X)\big].
\end{aligned}
$$
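The following Monte Carlo sketch (ours; the exponential parent and its closed-form $h_{j+1}$ are assumptions made for illustration) checks the representation (4.3) against the exponential value $\frac{n(n+1)}{2\lambda}$ from Example 4.1(d).

```python
# Hedged Monte Carlo check of Proposition 4.5 for an Exp(lambda) parent, where
# h_{j+1}(x) = int_0^x (lambda*z)^{j+1} dz = (lambda*x)^{j+1} * x / (j+2).
import numpy as np
from scipy.special import factorial

rng = np.random.default_rng(1)
lam, n = 2.0, 4
x = rng.exponential(scale=1 / lam, size=500_000)       # samples from the parent X

total = 0.0
for j in range(n):
    h_j1 = (lam * x)**(j + 1) * x / (j + 2)            # h_{j+1}(X) for the exponential parent
    total += h_j1.mean() / factorial(j)

print(total, n * (n + 1) / (2 * lam))                   # both ~ 5.0 up to Monte Carlo error
```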
The next propositions give some lower and upper bounds for $I(\bar F_{R_n},\bar F)$.
Proposition 4.6. For a non-negative random variable $X$ and for all integers $n\ge 1$, it holds that
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{[\mathcal E(X)]^{j+1}}{j!}, \tag{4.4}
$$
where
$$
\mathcal E(X)=-\int_0^{+\infty}\bar F(x)\log\bar F(x)\,dx \tag{4.5}
$$
is the cumulative residual entropy (CRE) (see Rao et al. 2004).

Proof. Since $\bar F(x)\ge[\bar F(x)]^{n}$ for all integers $n\ge 1$, we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}\bar F(x)\,[-\log\bar F(x)]^{j+1}\,dx
\ge\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}\big[-\bar F(x)\log\bar F(x)\big]^{j+1}\,dx.
$$
By noting that $g(x)=x^{n}$, for all integers $n\ge 1$, is a convex function, Jensen's inequality gives
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{1}{j!}\left(-\int_0^{\infty}\bar F(x)\log\bar F(x)\,dx\right)^{j+1},
$$
and the proof follows by recalling (4.5).
Corollary 4.7. Let $X$ be an absolutely continuous non-negative random variable with cdf $F$. Then we have
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{1}{j!}\left(\int_0^{\infty}F(x)\bar F(x)\,dx\right)^{j+1}.
$$
Proof. Using Proposition 1 of Rao (2005), a lower bound for the CRE is
$$
\mathcal E(X)\ge\int_0^{+\infty}F(x)\bar F(x)\,dx.
$$
Hence, the proof is completed by Proposition 4.6.
Corollary 4.8. Let $X$ be a continuous non-negative random variable with survival function $\bar F$ and finite mean $E(X)<\infty$. Then we have
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{1}{j!}\big(E(X)\,\mathrm{gini}[X]\big)^{j+1},
$$
where $\mathrm{gini}[\,\cdot\,]$ is the Gini index, a celebrated measure of income inequality, given by (see Wang 1998)
$$
\mathrm{gini}[X]=1-\frac{\int_0^{\infty}[\bar F(x)]^{2}\,dx}{E(X)}. \tag{4.6}
$$
Proof. From Proposition 5.1 of Wang (1998), we have
$$
\int_0^{+\infty}F(x)\bar F(x)\,dx=\frac{1}{2}E[|X-Y|]=E(X)\,\mathrm{gini}[X],
$$
where $X$ and $Y$ are independent and have the same distribution. Hence, Corollary 4.7 completes the proof.
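As a small numerical sanity check of Corollary 4.8 (our sketch; the exponential example is an assumption), note that for an Exp($\lambda$) parent the bound can be compared directly with the exact value $\frac{n(n+1)}{2\lambda}$ from Example 4.1(d).

```python
# Hedged numerical check of the Gini-index lower bound of Corollary 4.8 for an Exp(lambda) parent.
import numpy as np
from scipy import integrate
from scipy.special import factorial

lam, n = 2.0, 3
Fbar = lambda x: np.exp(-lam * x)

EX, _  = integrate.quad(Fbar, 0, np.inf)                     # E(X) = integral of Fbar
EF2, _ = integrate.quad(lambda x: Fbar(x)**2, 0, np.inf)     # integral of Fbar^2
gini   = 1 - EF2 / EX                                        # eq. (4.6); equals 1/2 here

bound = sum((EX * gini)**(j + 1) / factorial(j) for j in range(n))
exact = n * (n + 1) / (2 * lam)                              # Example 4.1(d)
print(bound, exact, bound <= exact)                          # ~ 0.32, 3.0, True
```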
Corollary 4.9. Let $X$ be a non-negative absolutely continuous random variable. Then
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{C^{j+1}}{j!}\,e^{(j+1)H(X)},
$$
where $C=\exp\left(\int_0^{1}\log(x|\log x|)\,dx\right)\approx 0.2065$ and $H(X)=-\int_0^{+\infty}f(x)\log f(x)\,dx$ is the Shannon entropy of $X$.

Proof. The proof follows by recalling (4.4) and Proposition 4.2 of Di Crescenzo and Longobardi (2009).
Proposition 4.10. Let $X$ be an absolutely continuous non-negative random variable with finite mean $\mu=E(X)$. Then we have
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\frac{1}{j!}\,h_{j+1}(\mu),
$$
where $h_{j+1}(\cdot)$ is defined in Proposition 4.5.

Proof. By noting that $h_{j+1}(\cdot)$ is a convex function, applying Jensen's inequality in (4.3) completes the proof.
Proposition 4.11. Let $X$ be a non-negative random variable with absolutely continuous cumulative distribution function $F(x)$. Then for $n=1,2,\dots$ we have
$$
I(\bar F_{R_n},\bar F)\le\sum_{j=0}^{n-1}\frac{1}{j!}\int_0^{\infty}[-\log\bar F(x)]^{j+1}\,dx.
$$
Proof. Using (4.1), the proof is easy.
Proposition 4.12. Let $X$ be a non-negative random variable with survival function $\bar F(x)$. Then for $n=1,2,\dots$ we have
$$
I(\bar F_{R_n},\bar F)\ge\sum_{j=0}^{n-1}\sum_{i=0}^{j+1}\frac{(-1)^{i}\,(j+1)}{i!\,(j+1-i)!}\int_0^{\infty}[\bar F(x)]^{i+1}\,dx.
$$
Proof. Since $-\log\bar F(x)\ge 1-\bar F(x)$, the proof follows by recalling (4.1).
Remark 4.1. In analogy with (4.1), a measure of cumulative residual inaccuracy associated with $\bar F$ and $\bar F_{R_n}$ is given by
$$
I(\bar F,\bar F_{R_n})=\mathcal E(X)-E\left[\frac{(1-U)\,\log\Big(\sum_{j=0}^{n-1}\frac{[-\log(1-U)]^{j}}{j!}\Big)}{f(F^{-1}(U))}\right],
$$
where $U$ is uniformly distributed on $(0,1)$.
In the following, we obtain some results on $I(\bar F_{R_n},\bar F)$ and its connection with notions of reliability theory.
Proposition 4.13. If $X$ is DFR, then for $n=1,2,\dots$ we have
$$
I(\bar F_{R_{n+1}},\bar F)-I(\bar F_{R_n},\bar F)\ge\sum_{i=1}^{n+1}E_{R_i}\!\left[\frac{1}{\lambda_F(X)}\right]. \tag{4.7}
$$
Proof. Suppose that $f_{R_n}$ is the pdf of the $n$th record value $R_n$. Then the ratio $\frac{f_{R_{n+1}}(t)}{f_{R_n}(t)}=\frac{-\log\bar F(t)}{n}$ is increasing in $t$. Therefore $R_n\le_{lr}R_{n+1}$, and this implies $R_n\le_{st}R_{n+1}$, i.e., $\bar F_{R_n}\le\bar F_{R_{n+1}}$. Hence, if $X$ is DFR and $\lambda_F(x)$ is its hazard rate, then $\frac{1}{\lambda_F(x)}$ is an increasing function of $x$. So, from (4.1) and the equivalence (1.A.7) in Shaked and Shanthikumar (2007), we have
$$
\begin{aligned}
I(\bar F_{R_{n+1}},\bar F)&=\sum_{j=0}^{n}(j+1)E_{R_{j+2}}\!\left[\frac{1}{\lambda_F(X)}\right]
\ge\sum_{j=0}^{n}(j+1)E_{R_{j+1}}\!\left[\frac{1}{\lambda_F(X)}\right]\\
&=\sum_{i=-1}^{n-1}(i+2)E_{R_{i+2}}\!\left[\frac{1}{\lambda_F(X)}\right]
=\sum_{i=0}^{n-1}(i+2)E_{R_{i+2}}\!\left[\frac{1}{\lambda_F(X)}\right]+E_{R_1}\!\left[\frac{1}{\lambda_F(X)}\right]\\
&=\sum_{i=0}^{n-1}(i+1)E_{R_{i+2}}\!\left[\frac{1}{\lambda_F(X)}\right]+\sum_{i=0}^{n-1}E_{R_{i+2}}\!\left[\frac{1}{\lambda_F(X)}\right]+E_{R_1}\!\left[\frac{1}{\lambda_F(X)}\right]\\
&=I(\bar F_{R_n},\bar F)+\sum_{i=1}^{n+1}E_{R_i}\!\left[\frac{1}{\lambda_F(X)}\right].
\end{aligned}\tag{4.8}
$$
The proof is completed.
Proposition 4.14. Let $X$ and $Y$ be two non-negative random variables with reliability functions $\bar F(x)$, $\bar G(x)$, respectively. If $X\le_{hr}Y$ and $X$ is DFR, then
$$
I(\bar F_{R_n},\bar F)\le I(\bar G_{\tilde R_n},\bar G) \tag{4.9}
$$
for $n=1,2,\dots$.

Proof. It is well known that $X\le_{hr}Y$ implies $X\le_{st}Y$ (see Shaked and Shanthikumar 2007). Hence, we have
$$
\bar F_{R_{j+2}}\le\bar G_{\tilde R_{j+2}},
$$
where $\bar G_{\tilde R_{j+2}}$ is the survival function of $\tilde R_{j+2}$, the $(j+2)$th upper record value arising from $Y$. That is, $R_{j+2}\le_{st}\tilde R_{j+2}$ holds. This is equivalent (see Shaked and Shanthikumar 2007) to having
$$
E(\phi(R_{j+2}))\le E(\phi(\tilde R_{j+2}))
$$
for all increasing functions $\phi$ such that these expectations exist. Thus, if we assume that $X$ is DFR and $\lambda_F(x)$ is its failure rate, then $\frac{1}{\lambda_F(x)}$ is increasing and we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}(j+1)E_{R_{j+2}}\!\left[\frac{1}{\lambda_F(X)}\right]\le\sum_{j=0}^{n-1}(j+1)E_{\tilde R_{j+2}}\!\left[\frac{1}{\lambda_F(X)}\right].
$$
On the other hand, $X\le_{hr}Y$ implies that the respective failure rate functions satisfy $\lambda_F(x)\ge\lambda_G(x)$. Hence, we have
$$
\sum_{j=0}^{n-1}(j+1)E_{\tilde R_{j+2}}\!\left[\frac{1}{\lambda_F(X)}\right]\le\sum_{j=0}^{n-1}(j+1)E_{\tilde R_{j+2}}\!\left[\frac{1}{\lambda_G(Y)}\right]=I(\bar G_{\tilde R_n},\bar G).
$$
Therefore, combining both expressions we obtain $I(\bar F_{R_n},\bar F)\le I(\bar G_{\tilde R_n},\bar G)$.
Proposition 4.15. Let $X$ and $Y$ be two non-negative random variables with reliability functions $\bar F(x)$, $\bar G(x)$, respectively. If $X\le_{icx}Y$, then
$$
I(\bar F_{R_n},\bar F)\le I(\bar G_{\tilde R_n},\bar G).
$$
Proof. Since $h_{j+1}(\cdot)$ is an increasing convex function for $j\ge 0$, it follows from Shaked and Shanthikumar (2007) that $X\le_{icx}Y$ implies $h_{j+1}(X)\le_{icx}h_{j+1}(Y)$. Hence, the proof is completed by recalling the definition of the increasing convex order and Proposition 4.5.
Proposition 4.16. If $X$ is IFRA (DFRA), then for $n=1,2,\dots$ we have
$$
I(\bar F_{R_n},\bar F)\le(\ge)\sum_{j=0}^{n-1}\frac{1}{j!}E\big[X(-\log\bar F(X))^{j}\big]. \tag{4.10}
$$
Proof. From (4.1) we have
$$
I(\bar F_{R_n},\bar F)=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j}}{j!}\,[-\log\bar F(x)]\,\bar F(x)\,dx. \tag{4.11}
$$
Now, since $X$ is IFRA (DFRA), $\frac{-\log\bar F(x)}{x}$ is increasing (decreasing) with respect to $x>0$, which implies that
$$
-\bar F(x)\log\bar F(x)\le(\ge)\,x f(x),\qquad x>0. \tag{4.12}
$$
Hence, the proof is completed by noting (4.11) and (4.12).
Proposition 4.17. Let $X$ and $Y$ be two non-negative random variables with survival functions $\bar F(x)$ and $\bar G(x)$, respectively. If $X\le_{hr}Y$, then for $n=1,2,\dots$ it holds that
$$
\frac{I(\bar F_{R_n},\bar F)}{E(X)}\le\frac{I(\bar G_{\tilde R_n},\bar G)}{E(Y)}.
$$
Proof. By noting that the function $h_{j+1}(x)=\int_0^{x}[-\log\bar F(z)]^{j+1}\,dz$ is an increasing convex function, under the assumption $X\le_{hr}Y$ it follows from Shaked and Shanthikumar (2007) that
$$
\frac{\sum_{j=0}^{n-1}\frac{1}{j!}E[h_{j+1}(X)]}{E(X)}\le\frac{\sum_{j=0}^{n-1}\frac{1}{j!}E[h_{j+1}(Y)]}{E(Y)}.
$$
Hence, the proof is completed by recalling (4.3).
Assume that $X_\theta$ denotes a non-negative absolutely continuous random variable with survival function $\bar H_\theta(x)=[\bar F(x)]^{\theta}$, $x\ge 0$. This model is known as the proportional hazards rate model. We now obtain the cumulative residual measure of inaccuracy between $\bar H_{R_n}$ and $\bar H$ as follows:
$$
\begin{aligned}
I(\bar H_{R_n},\bar H)&=-\int_0^{+\infty}\bar H_{R_n}(x)\log\bar H(x)\,dx\\
&=\sum_{j=0}^{n-1}\theta^{j+1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,[\bar F(x)]^{\theta}\,dx.
\end{aligned}\tag{4.13}
$$
Proposition 4.18. If $\theta\ge(\le)\,1$, then for all integers $n\ge 1$ we have
$$
I(\bar H_{R_n},\bar H)\le(\ge)\sum_{j=0}^{n-1}(j+1)\,\theta^{j+1}\,\mathcal E_{j+1}(X),
$$
where $\mathcal E_{j+1}(X)$ is the generalized cumulative residual entropy of $X$, defined by Psarrakos and Navarro (2013) as
$$
\mathcal E_{j+1}(X)=\int_0^{+\infty}\frac{\bar F(x)\,[-\log\bar F(x)]^{j+1}}{(j+1)!}\,dx.
$$
Proof. Suppose that $\theta\ge(\le)\,1$; then clearly $[\bar F(x)]^{\theta}\le(\ge)\,\bar F(x)$, and hence (4.13) yields
$$
I(\bar H_{R_n},\bar H)\le(\ge)\sum_{j=0}^{n-1}(j+1)\,\theta^{j+1}\,\mathcal E_{j+1}(X).
$$
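A brief numerical sketch (our own, with an Exp(1) baseline assumed as an example) of Proposition 4.18: for $\theta\ge 1$ the quantity (4.13) should not exceed $\sum_{j}(j+1)\theta^{j+1}\mathcal E_{j+1}(X)$.

```python
# Hedged check of Proposition 4.18 under the proportional hazards model with an Exp(1) baseline.
import numpy as np
from scipy import integrate
from scipy.special import factorial

theta, n = 2.0, 3
Fbar = lambda x: np.exp(-x)                    # Exp(1) baseline survival function

# Left-hand side: eq. (4.13) evaluated term by term
lhs = sum(
    theta**(j + 1)
    * integrate.quad(lambda x: (-np.log(Fbar(x)))**(j + 1) / factorial(j) * Fbar(x)**theta, 0, np.inf)[0]
    for j in range(n)
)

# Right-hand side: generalized cumulative residual entropies of the baseline
gcre = [integrate.quad(lambda x: Fbar(x) * (-np.log(Fbar(x)))**(j + 1) / factorial(j + 1), 0, np.inf)[0]
        for j in range(n)]
rhs = sum((j + 1) * theta**(j + 1) * gcre[j] for j in range(n))

print(lhs, rhs, lhs <= rhs)                    # ~ 3.0, 34.0, True
```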
In the remainder of this section, we study the dynamic version of $I(\bar F_{R_n},\bar F)$. Let $X$ be the lifetime of a system under the condition that the system has survived up to age $t$. Analogously, we can also consider the dynamic version of $I(\bar F_{R_n},\bar F)$ as
$$
\begin{aligned}
I(\bar F_{R_n},\bar F;t)&=-\int_t^{+\infty}\frac{\bar F_{R_n}(x)}{\bar F_{R_n}(t)}\log\frac{\bar F(x)}{\bar F(t)}\,dx\\
&=\log\bar F(t)-\int_t^{+\infty}\frac{\bar F_{R_n}(x)}{\bar F_{R_n}(t)}\log\bar F(x)\,dx\\
&=\log\bar F(t)+\frac{1}{\bar F_{R_n}(t)}\sum_{j=0}^{n-1}\int_t^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx.
\end{aligned}\tag{4.14}
$$
Note that $\lim_{t\to 0}I(\bar F_{R_n},\bar F;t)=I(\bar F_{R_n},\bar F)$. Since $\log\bar F(t)\le 0$ for $t\ge 0$, we have
$$
I(\bar F_{R_n},\bar F;t)\le\frac{1}{\bar F_{R_n}(t)}\sum_{j=0}^{n-1}\int_t^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx
\le\frac{1}{\bar F_{R_n}(t)}\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx=\frac{I(\bar F_{R_n},\bar F)}{\bar F_{R_n}(t)}.
$$
Theorem 4.19. Let $X$ be a non-negative continuous random variable with distribution function $F$. Let the dynamic cumulative inaccuracy of the $n$th record value, denoted by $I(\bar F_{R_n},\bar F;t)$, be finite for $t\ge 0$. Then $I(\bar F_{R_n},\bar F;t)$ characterizes the distribution function.

Proof. From (4.14) we have
$$
I(\bar F_{R_n},\bar F;t)=\log\bar F(t)+\frac{1}{\bar F_{R_n}(t)}\sum_{j=0}^{n-1}\int_t^{+\infty}\frac{[-\log\bar F(x)]^{j+1}}{j!}\,\bar F(x)\,dx. \tag{4.15}
$$
Differentiating both sides of (4.15) with respect to $t$, we obtain
$$
\begin{aligned}
\frac{d}{dt}\big[I(\bar F_{R_n},\bar F;t)\big]
&=-\lambda_F(t)+\log\bar F(t)+\lambda_{F_{R_n}}(t)\big[I(\bar F_{R_n},\bar F;t)-\log\bar F(t)\big]\\
&=-\lambda_F(t)+\log\bar F(t)+c(t)\lambda_F(t)\big[I(\bar F_{R_n},\bar F;t)-\log\bar F(t)\big]\\
&=\log\bar F(t)+\lambda_F(t)\Big(-1+c(t)\big[I(\bar F_{R_n},\bar F;t)-\log\bar F(t)\big]\Big),
\end{aligned}
$$
where $c(t)$ is as defined in the proof of Theorem 3.4.
Taking the derivative with respect to $t$ again, we obtain
$$
\lambda_F'(t)=\frac{\lambda_F(t)\Big[c(t)\,I''(\bar F_{R_n},\bar F;t)+c(t)\lambda_F(t)-c'(t)\big(I'(\bar F_{R_n},\bar F;t)+\lambda_F(t)-\log\bar F(t)\big)-c^{2}(t)\lambda_F(t)\big(I'(\bar F_{R_n},\bar F;t)+\lambda_F(t)\big)\Big]}{c(t)\big[I'(\bar F_{R_n},\bar F;t)-\log\bar F(t)\big]}. \tag{4.16}
$$
Suppose that there are two survival functions $\bar F$ and $\bar F^{*}$ such that
$$
I(\bar F_{R_n},\bar F;t)=I(\bar F^{*}_{R_n},\bar F^{*};t)=z(t).
$$
Then, for all $t$, from (4.16) we get
$$
\lambda_F'(t)=\varphi(t,\lambda_F(t)),\qquad \lambda_{F^{*}}'(t)=\varphi(t,\lambda_{F^{*}}(t)),
$$
where
$$
\varphi(t,y)=\frac{y\Big[c(t)z''(t)+c(t)y-c'(t)\big(z'(t)+y-w(t)\big)-c^{2}(t)\,y\,\big(z'(t)+y\big)\Big]}{c(t)\big[z'(t)-w(t)\big]}
$$
and $w(t)=\log\bar F(t)$. By using Lemmas 3.2 and 3.3 we have $\lambda_F(t)=\lambda_{F^{*}}(t)$ for all $t$. Since the hazard rate function characterizes the distribution function uniquely, we complete the proof.
Analogous to the measure (4.1), the cumulative measure of inaccuracy between $F_{L_n}$ (the distribution function of the $n$th lower record value $L_n$) and $F$ is given by
$$
\begin{aligned}
I(F_{L_n},F)&=-\int_0^{+\infty}F_{L_n}(x)\log F(x)\,dx\\
&=-\int_0^{+\infty}\sum_{j=0}^{n-1}\frac{[-\log F(x)]^{j}}{j!}\,F(x)\log F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log F(x)]^{j+1}}{j!}\,F(x)\,dx\\
&=\sum_{j=0}^{n-1}\int_0^{+\infty}\frac{[-\log F(x)]^{j+1}}{j!}\times\frac{f(x)}{\frac{f(x)}{F(x)}}\,dx\\
&=\sum_{j=0}^{n-1}(j+1)\,E_{L_{j+2}}\!\left[\frac{1}{\tilde\lambda_F(X)}\right],
\end{aligned}\tag{4.17}
$$
where $\tilde\lambda_F(x)=\frac{f(x)}{F(x)}$ is the reversed hazard rate and $L_{j+2}$ is a random variable with density function $f_{L_{j+2}}(x)=\frac{[-\log F(x)]^{j+1}}{(j+1)!}\,f(x)$. In future work, we will obtain results for $I(F_{L_n},F)$ and study the dynamic version of the cumulative measure of inaccuracy using the past lifetime.
5. Conclusions
In this paper, we discussed the concept of inaccuracy between $f_{R_n}$ and $f$. We proposed the measure of residual inaccuracy for $R_n$ and studied its characterization results. It is also proved that $I(f_{R_n},f;t)$ can uniquely determine the parent distribution $F$. Moreover, we studied some new basic properties of $I(\bar F_{R_n},\bar F)$ and $I(\bar F_{R_n},\bar F;t)$, such as the effect of linear transformations and stochastic order properties. We also constructed bounds for $I(\bar F_{R_n},\bar F)$ and obtained characterization results. These concepts can be applied in measuring the inaccuracy contained in the associated residual lifetime.
Acknowledgments
We are grateful to the editor and referees for their constructive comments.
References
Arnold, B. C., N. Balakrishnan, and H. N. Nagaraja. 1992. A first course in order statistics. New York: John Wiley and Sons.
Baratpour, S., J. Ahmadi, and N. R. Arghami. 2007a. Entropy properties of record statistics. Statistical Papers 48 (1):197–213.
Baratpour, S., J. Ahmadi, and N. R. Arghami. 2007b. Some characterizations based on entropy of order statistics and record values. Communications in Statistics—Theory and Methods 36:47–57.
Crescenzo, A. Di, and M. Longobardi. 2009. On cumulative entropies. Journal of Statistical Planning and Inference 139:4072–87.
Gupta, R. C., and S. N. U. A. Kirmani. 2008. Characterizations based on conditional mean function. Journal of Statistical Planning and Inference 138:964–70.
Kerridge, D. F. 1961. Inaccuracy and inference. Journal of the Royal Statistical Society, Series B 23 (1):184–94.
Kumar, V., and H. C. Taneja. 2015. Dynamic cumulative residual and past inaccuracy measures. Journal of Statistical Theory and Applications 14 (4):399–412.
Psarrakos, G., and J. Navarro. 2013. Generalized cumulative residual entropy and record values. Metrika 76:623–40.
Rao, M., Y. Chen, B. C. Vemuri, and F. Wang. 2004. Cumulative residual entropy: A new measure of information. IEEE Transactions on Information Theory 50 (6):1220–8.
Rao, M. 2005. More on a new concept of entropy and information. Journal of Theoretical Probability 18:967–81.
Shaked, M., and J. G. Shanthikumar. 2007. Stochastic orders. New York: Springer.
Taneja, H. C., V. Kumar, and R. Srivastava. 2009. A dynamic measure of inaccuracy between two residual lifetime distributions. International Mathematical Forum 4 (25):1213–20.
Thapliyal, R., and H. C. Taneja. 2013. A measure of inaccuracy in order statistics. Journal of Statistical Theory and Applications 12 (2):200–7.
Thapliyal, R., and H. C. Taneja. 2015. On residual inaccuracy of order statistics. Statistics & Probability Letters 97:125–31.
Wang, S. 1998. An actuarial index of the right-tail risk. North American Actuarial Journal 2:88–101.