FISHER'S INFORMATION MEASURES FOR A TRUNCATED GAMMA DISTRIBUTION (II)

I. Mihoc*, C. I. Fatu**

* Faculty of Mathematics and Computer Sciences, "Babeş-Bolyai" University, 3400 Cluj-Napoca, Romania. E-mail: imihoc@math.ubbcluj.ro
** Faculty of Economics, Christian University "Dimitrie Cantemir", 3400 Cluj-Napoca, Romania

Keywords: multinomial distribution, exponential family, Fisher information, efficient estimator
Mathematics Subject Classifications (1991): 62B10, 94A17
Abstract. The Fisher information measure is well known in estimation theory. Its importance is demonstrated by the Cramér-Rao inequality, which relates the variance of an estimator to the Fisher information measure. The objective of this paper is to give a definition and some properties of the truncated Gamma distribution. We then determine the corresponding Fisher information measures.
1. Definitions and some properties of a truncated Gamma distribution [3]

We consider the probability space $(\Omega, K, P)$, where $\Omega$ is an arbitrary nonempty set, called the set of elementary events; $K$ is a $\sigma$-algebra of subsets of $\Omega$ containing $\Omega$ itself (the elements of $K$ are called events); and $P$ is a probability measure, that is, a non-negative $\sigma$-additive set function defined on $K$ such that $P(\Omega) = 1$.
Let $X$ be a random variable defined on this probability space $(\Omega, K, P)$. In what follows, we consider $X$ to be a continuous random variable whose probability density function $f(x; \theta_1, \theta_2)$ depends on two parameters $\theta_1$ and $\theta_2$, where $\theta = (\theta_1, \theta_2)$, $\theta \in D \subseteq \mathbb{R}^2$. $D$ will be called the parameter space. To each value of $\theta$, $\theta \in D$, there corresponds one member of the family, which will be denoted by the symbol $\{f(x; \theta),\ \theta \in D\}$.
Definition 1.1. The random variable $X$ has the Gamma distribution with parameters $a > 0$ and $b > 0$ if it is a continuous random variable and its probability density function is of the following form:
$$f_X(x; a, b) = \begin{cases} 0 & \text{if } x \le 0, \\ \dfrac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx} & \text{if } x > 0, \end{cases} \tag{1.1}$$
where
$$\Gamma(a) = \int_0^{+\infty} t^{a-1} e^{-t}\, dt \tag{1.2}$$
is Euler's Gamma function or the complete Euler function.
For such a random variable the expected value $E(X)$ and the variance $\operatorname{Var} X$ have the following values:
$$E(X) = \frac{a}{b}, \qquad \operatorname{Var} X = \frac{a}{b^2}, \qquad a, b \in \mathbb{R}_+. \tag{1.3}$$
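As a quick numerical check of (1.3), the following minimal sketch assumes SciPy's shape/scale parameterization of the Gamma law, in which scale $= 1/b$ matches the density (1.1); the values of $a$ and $b$ are illustrative.

```python
# A minimal sketch checking the moments (1.3), assuming SciPy's
# parameterization gamma(a, scale=1/b), which matches the density (1.1).
from scipy.stats import gamma

a, b = 3.0, 2.0
X = gamma(a, scale=1.0 / b)   # shape a, rate b
print(X.mean(), a / b)        # both 1.5
print(X.var(), a / b**2)      # both 0.75
```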
Definition 1.2. We say that the continuous random variable $X$ has a Gamma distribution truncated to the left at $X = \tau$ if its probability density function, denoted by $f_\tau$, is of the form
$$f_\tau(x; a, b) = \begin{cases} 0 & \text{if } x < \tau, \\ A_\tau(a, b)\, x^{a-1} e^{-bx} & \text{if } x \ge \tau, \end{cases} \qquad \tau \in \mathbb{R},\ \tau \ge 0, \tag{1.4}$$
where $A_\tau(a, b)$ is a constant.
Lemma 1.1. If the parameter $a$ is a positive integer, that is, $a \in \mathbb{N} = \{1, 2, \dots\}$, and $b > 0$, then the constant $A_\tau(a, b)$ has the following expression:
$$A_\tau(a, b) = \frac{b^a}{e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1}\, (b\tau)^{(a-1)-k}}, \tag{1.5}$$
where
$$A^k_{a-1} = (a-1)(a-2)\cdots[(a-1)-(k-1)], \qquad A^0_{a-1} = 1. \tag{1.6}$$
Using this lemma, namely (1.5), we can give the following definition.
Definition 1.3. We say that the continuous random variable $X$ has a Gamma distribution truncated to the left at $X = \tau$, with parameters $a \in \mathbb{N}$ and $b > 0$, if its probability density function $f_\tau$ is of the form
$$f_\tau(x; a, b) = \begin{cases} 0 & \text{if } x < \tau, \\ \dfrac{b^a}{e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}}\, x^{a-1} e^{-bx} & \text{if } x \ge \tau, \end{cases} \qquad \tau \ge 0. \tag{1.7}$$
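As a sanity check of (1.5)-(1.7), the sketch below computes $A_\tau(a, b)$ from the finite sum for integer $a$ and confirms by quadrature that it normalizes the density (1.4); helper names and parameter values are illustrative.

```python
# A minimal sketch, assuming integer a: compute A_tau(a, b) from the finite
# sum (1.5)-(1.6) and confirm by quadrature that it normalizes (1.4).
import math
import numpy as np
from scipy.integrate import quad

def A_tau_sum(a, b, tau):
    # A^k_{a-1} = (a-1)(a-2)...(a-k), a falling factorial with A^0_{a-1} = 1
    falling = [math.prod(range(a - k, a)) for k in range(a)]
    s = sum(falling[k] * (b * tau) ** (a - 1 - k) for k in range(a))
    return b ** a / (math.exp(-b * tau) * s)

a, b, tau = 4, 1.5, 0.7
A = A_tau_sum(a, b, tau)
total, _ = quad(lambda x: A * x ** (a - 1) * np.exp(-b * x), tau, np.inf)
print(total)   # ~ 1.0, so A_tau(a, b) normalizes the truncated density
```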
Remark 1.1. If we consider $\tau = 0$, then for the constant $A_\tau(a, b)$ we obtain the following value:
$$A_\tau(a, b) = \frac{b^a}{\Gamma(a)}, \qquad a, b > 0, \tag{1.8}$$
and from Definition 1.3 we recover precisely Definition 1.1 of the ordinary Gamma distribution.
Lemma 1.2. If the parameters $a$ and $b$ are real and positive numbers, then the constant $A_\tau(a, b)$ has the following expression:
$$A_\tau(a, b) = \frac{b^a}{\Gamma(a)\,[1 - \Gamma_{b\tau}(a)]}, \qquad a, b > 0,\ \tau \ge 0, \tag{1.9}$$
where
$$\Gamma_x(a) = F(x) = P(X < x) = \frac{1}{\Gamma(a)} \int_0^x t^{a-1} e^{-t}\, dt, \qquad x > 0, \tag{1.10}$$
is the incomplete Gamma function of Euler or the probability distribution function of the random variable $X$.
Using this lemma we obtain a new definition of a Gamma distribution truncated to the left.
Definition 1.4. We say that the continuous random variable $X$ has a Gamma distribution truncated to the left at $X = \tau$, with parameters $a > 0$ and $b > 0$, if its probability density function $f_\tau$ is of the form
$$f_\tau(x; a, b) = \begin{cases} 0 & \text{if } x < \tau, \\ \dfrac{b^a}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]}\, x^{a-1} e^{-bx} & \text{if } x \ge \tau, \end{cases} \qquad \tau \in \mathbb{R},\ \tau \ge 0. \tag{1.11}$$
Remark 1.2. In this case too, when $\tau = 0$, we obtain the same value (1.8) for the constant $A_\tau(a, b)$.
Lemma 1.3. If the parameter $a \in \mathbb{N}$, $a$ fixed, then the expressions (1.5) and (1.9) coincide.
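The incomplete-gamma form (1.9) is equally easy to evaluate: SciPy's regularized function gammainc(a, x) is exactly $\Gamma_x(a)$ of (1.10). A minimal sketch, which for integer $a$ should reproduce the finite-sum value from the previous sketch, in line with Lemma 1.3:

```python
# Sketch of (1.9): SciPy's gammainc(a, x) is the regularized incomplete
# Gamma function, i.e. Gamma_x(a) of (1.10). For integer a this should
# match A_tau_sum from the previous sketch, as Lemma 1.3 states.
from scipy.special import gamma as Gamma, gammainc

def A_tau_inc(a, b, tau):
    return b ** a / (Gamma(a) * (1.0 - gammainc(a, b * tau)))

print(A_tau_inc(4, 1.5, 0.7))   # equals A_tau_sum(4, 1.5, 0.7)
```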
Theorem 1.1. Let $X_\tau$ be a random variable with a Gamma distribution truncated to the left at $X = \tau$ and probability density of the form (1.11). Then for the mean value of $X_\tau$ we have
$$E(X_\tau) = \frac{a\,[1 - \Gamma_{b\tau}(a+1)]}{b\,[1 - \Gamma_{b\tau}(a)]}, \qquad a > 0,\ b > 0,\ \tau \ge 0. \tag{1.12}$$
Remark 1.3. If we consider $\tau = 0$, then we obtain
$$E(X_\tau) = E(X) = \frac{a}{b}, \tag{1.13}$$
that is, precisely the mean value of a random variable $X$ which follows an ordinary Gamma distribution.
Theorem 1.2. Let $X_\tau$ be a random variable with a Gamma distribution truncated to the left at $X = \tau$ and probability density of the form (1.11). Then the variance of this random variable is given by
$$\operatorname{Var} X_\tau = \frac{a(a+1)}{b^2} \cdot \frac{1 - \Gamma_{b\tau}(a+2)}{1 - \Gamma_{b\tau}(a)} - \frac{a^2}{b^2} \cdot \frac{[1 - \Gamma_{b\tau}(a+1)]^2}{[1 - \Gamma_{b\tau}(a)]^2}, \qquad a, b > 0,\ \tau \ge 0. \tag{1.14}$$
Remark 1.4. If we consider $\tau = 0$, then we obtain
$$\operatorname{Var} X_\tau = \operatorname{Var} X = \frac{a}{b^2}, \tag{1.15}$$
that is, precisely the variance of a random variable $X$ which follows an ordinary Gamma distribution.
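Before moving on, the closed forms (1.12) and (1.14) can be verified against direct numerical integration of the density (1.11); the helper below is an illustrative sketch, not part of the paper's development.

```python
# Illustrative sketch: the closed forms (1.12) and (1.14) versus direct
# numerical integration of the truncated density (1.11).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, gammainc

def trunc_mean_var(a, b, tau):
    G = lambda s: 1.0 - gammainc(s, b * tau)   # 1 - Gamma_{b tau}(s)
    mean = a * G(a + 1) / (b * G(a))                          # (1.12)
    var = (a * (a + 1) / b**2) * G(a + 2) / G(a) - mean**2    # (1.14)
    return mean, var

a, b, tau = 2.5, 1.2, 0.8
A = b**a / (Gamma(a) * (1.0 - gammainc(a, b * tau)))          # (1.9)
f = lambda x: A * x ** (a - 1) * np.exp(-b * x)
m1, _ = quad(lambda x: x * f(x), tau, np.inf)
m2, _ = quad(lambda x: x * x * f(x), tau, np.inf)
print(trunc_mean_var(a, b, tau))   # matches the quadrature values below
print(m1, m2 - m1**2)
```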
2. Fisher Information
Let $X$ be a continuous random variable whose probability density function $f(x; \theta)$ depends on a parameter $\theta$ taking values in a specified parameter space $D$, $D \subseteq \mathbb{R}$. In what follows we suppose that the parameter $\theta$ is unknown and we estimate a specified function of $\theta$, $g(\theta)$, with the help of a statistic $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ based on a random sample of size $n$, $S_n(X) = (X_1, X_2, \dots, X_n)$, where $X_i$, $i = \overline{1, n}$, are independent random variables identically distributed as $X$, that is, $f(x_i; \theta) \equiv f(x; \theta)$, $\theta \in D$, $i = \overline{1, n}$.
If $x_1, x_2, \dots, x_n$ are the observed experimental values of $X_1, X_2, \dots, X_n$, then the number $t(x_1, x_2, \dots, x_n)$ is an estimate of $\theta$ and is usually written as $\hat{\theta}_0 = t(x_1, x_2, \dots, x_n)$.
Because we need the notion of Fisher information measure, we first recall some definitions.
Definition 2.1. An estimator (a statistic) $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ is a function of the random sample vector
$$S_n(X) = (X_1, X_2, \dots, X_n) \tag{2.1}$$
that estimates $\theta$ but does not depend on $\theta$.
Definition 2.2. An estimator $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ of $\theta$ is said to be unbiased if and only if $E(\hat{\theta}) = \theta$; otherwise, the estimator is said to be biased. The bias in estimating $\theta$ with $\hat{\theta}$ is $E(\hat{\theta}) - \theta$.
Definition 2.3. Any statistic that converges stochastically to a parameter $\theta$ is called a consistent estimator of the parameter $\theta$; that is,
$$\lim_{n \to \infty} P\left[\,|\hat{\theta} - \theta| \le \varepsilon\,\right] = 1 \quad \text{for all } \varepsilon > 0. \tag{2.2}$$
Definition 2.4. An estimator $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ of $\theta$ is said to be a minimum variance unbiased estimator of $\theta$ if it has the following two properties:
a) $E(\hat{\theta}) = \theta$, that is, $\hat{\theta}$ is an unbiased estimator;
b) $\operatorname{Var}(\hat{\theta}) \le \operatorname{Var}(\theta^*)$ for any other estimator $\theta^* = h(X_1, X_2, \dots, X_n)$ which is also unbiased for $\theta$, that is, $E(\theta^*) = \theta$.
Definition 2.5. Let $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ and $\theta^* = h(X_1, X_2, \dots, X_n)$ be estimators of $\theta$. We say that the estimator $\hat{\theta}$ is more efficient than the estimator $\theta^*$ if
$$E[(\hat{\theta} - \theta)^2] \le E[(\theta^* - \theta)^2], \tag{2.3}$$
with strict inequality for some $\theta$, $\theta \in D$.
Definition 2.6. The relative efficiency of $\theta^*$ with respect to $\hat{\theta}$ is the ratio
$$e(\theta^*, \hat{\theta}) = \frac{E[(\hat{\theta} - \theta)^2]}{E[(\theta^* - \theta)^2]}. \tag{2.4}$$
In the case of unbiased estimators this is just the ratio of their variances, that is,
$$e(\theta^*, \hat{\theta}) = \frac{\operatorname{Var} \hat{\theta}}{\operatorname{Var} \theta^*}, \tag{2.5}$$
and the most efficient such estimator would be the one with minimum variance; a numerical illustration is sketched below.
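For instance, for an exponential mean $\theta$ both the sample mean $\bar{X}$ and the single observation $X_1$ are unbiased, so (2.4) reduces to a ratio of variances of roughly $1/n$. A hypothetical Monte Carlo check (names and values illustrative):

```python
# Hypothetical Monte Carlo illustration of (2.4): for an exponential mean
# theta, both the sample mean and X_1 alone are unbiased, so the relative
# efficiency reduces to a variance ratio of about 1/n.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000
samples = rng.exponential(theta, size=(reps, n))
mse_mean = np.mean((samples.mean(axis=1) - theta) ** 2)   # E[(Xbar - theta)^2]
mse_first = np.mean((samples[:, 0] - theta) ** 2)         # E[(X_1 - theta)^2]
print(mse_mean / mse_first)   # ~ 1/n = 0.1
```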
Remark 2.1. Consider a family of probability density functions $\{f(x; \theta),\ \theta \in D\}$. Let $S_n(X) = (X_1, X_2, \dots, X_n)$ be a random vector, where $X_1, X_2, \dots, X_n$ is a random sample from a distribution having probability density function $f(x; \theta)$, $\theta \in D$. Because this random sample is a set of independent, identically distributed random variables $X_1, X_2, \dots, X_n$, each with the distribution of $X$, if $x_1, x_2, \dots, x_n$ are the observed experimental values of $X_1, X_2, \dots, X_n$, then the joint probability density function of $X_1, X_2, \dots, X_n$, regarded as a function of $\theta$, has the following form:
$$L(x_1, x_2, \dots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta), \qquad \theta \in D, \tag{2.6}$$
where $L(x_1, x_2, \dots, x_n; \theta)$ is called the likelihood function of the random vector $S_n(X) = (X_1, X_2, \dots, X_n)$.
A well-known means of measuring the quality of $\hat{\theta} = t(X_1, X_2, \dots, X_n)$ is to use the inequality of Cramér-Rao-Fréchet:
$$\operatorname{Var}(\hat{\theta}) \ge \frac{[g'(\theta)]^2}{I_n(\theta)} = \frac{[g'(\theta)]^2}{n\, I_X(\theta)}, \tag{2.7}$$
provided that some regularity conditions concerning the probability density function $f(x; \theta)$, $\theta \in D$, are satisfied; in particular, it requires the possibility of differentiating under the integral sign.
Definition 2.7. The quantity $I_X(\theta)$, defined by the relation
$$I_X(\theta) = \int \left( \frac{\partial \ln f(x; \theta)}{\partial \theta} \right)^2 f(x; \theta)\, dx, \tag{2.8}$$
is known as Fisher's information measure.
Remark 2.2. The quantity $I_X(\theta)$ measures the information about $g(\theta)$ which is contained in an observation of $X$. Also, the quantity $I_n(\theta) = n \cdot I_X(\theta)$ measures the information about $g(\theta)$ contained in a random sample $S_n(X) = (X_1, X_2, \dots, X_n)$ when $X_i$, $i = \overline{1, n}$, are independent and identically distributed random variables with density $f(x; \theta)$, $\theta \in D$.
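For the ordinary Gamma density (1.1) with $a$ known and $b$ unknown, the score is $\partial \ln f / \partial b = a/b - x$, and (2.8) can be evaluated by quadrature; the sketch below anticipates the closed form $a/b^2$ obtained in Corollary 3.3.

```python
# Sketch: Fisher's information (2.8) by quadrature for the ordinary Gamma
# density (1.1), a known and b unknown; parameter values are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

a, b = 3.0, 2.0
f = lambda x: b**a / Gamma(a) * x ** (a - 1) * np.exp(-b * x)
score = lambda x: a / b - x   # derivative of ln f(x; a, b) with respect to b
I, _ = quad(lambda x: score(x) ** 2 * f(x), 0, np.inf)
print(I, a / b**2)            # both 0.75, anticipating (3.21)
```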
Definition 2.8. An unbiased estimator of $g(\theta)$ that attains the minimum variance bound in (2.7) is known as an efficient estimator.
3. Fisher's information measures for the truncated gamma distributions

Let $X$ be a continuous random variable which has an ordinary Gamma distribution with probability density function (1.1), and let $X_\tau$ be a random variable which has a Gamma distribution truncated to the left at $X = \tau$, with probability density function
$$f_\tau(x; a, b) = \begin{cases} 0 & \text{if } x < \tau, \\ \dfrac{b^a}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]}\, x^{a-1} e^{-bx} & \text{if } x \ge \tau, \end{cases} \qquad \tau \ge 0,\ a, b \in \mathbb{R}_+, \tag{3.1}$$
where $\Gamma(a)$ is Euler's Gamma function, $\Gamma_{b\tau}(a)$ is the incomplete Gamma function of Euler, and $A_\tau(a, b)$ has the form (1.9).
Theorem 3.1. If $X_\tau$ follows a truncated gamma distribution with probability density function (3.1), where $a, b \in \mathbb{R}_+$, the parameter $a$ known and the parameter $b$ unknown, then the Fisher information measure corresponding to $X_\tau$ has the following form:
$$I_{X_\tau}(b) = \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right]^2 - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] S_1 + S_2, \tag{3.2}$$
where
$$S_1 = \frac{a\,[1 - \Gamma_{b\tau}(a+1)]}{b\,[1 - \Gamma_{b\tau}(a)]}, \tag{3.2a}$$
$$S_2 = \frac{a(a+1)}{b^2} \cdot \frac{1 - \Gamma_{b\tau}(a+2)}{1 - \Gamma_{b\tau}(a)}. \tag{3.2b}$$
Proof. Because $X_\tau$ is a continuous random variable, the Fisher information measure has the form
$$I_{X_\tau}(b) = \int_\tau^{\infty} \left( \frac{\partial \ln f_\tau(x; a, b)}{\partial b} \right)^2 f_\tau(x; a, b)\, dx, \qquad \tau \ge 0,\ a, b \in \mathbb{R}_+. \tag{3.3}$$
Using (3.1), we obtain
$$\ln f_\tau(x; a, b) = a \ln b - \ln \Gamma(a) - \ln[1 - \Gamma_{b\tau}(a)] + (a-1) \ln x - bx. \tag{3.4}$$
From (3.4), we get
$$\frac{\partial \ln f_\tau(x; a, b)}{\partial b} = \frac{a}{b} - x - \frac{\partial \ln[1 - \Gamma_{b\tau}(a)]}{\partial b} \tag{3.4a}$$
$$= \frac{a}{b} - x + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]}, \tag{3.4b}$$
because, from (1.10), we first obtain
$$\Gamma_{b\tau}(a) = \frac{1}{\Gamma(a)} \int_0^{b\tau} t^{a-1} e^{-t}\, dt, \qquad \tau \ge 0,\ a > 0,\ b > 0, \tag{3.5}$$
and then
$$\frac{\partial \ln[1 - \Gamma_{b\tau}(a)]}{\partial b} = -\frac{1}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \cdot \frac{\partial}{\partial b} \int_0^{b\tau} t^{a-1} e^{-t}\, dt = -\frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]}. \tag{3.5a}$$
From (3.4b), and making use of the definition of $I_{X_\tau}(b)$, we obtain
$$I_{X_\tau}(b) = \int_\tau^{\infty} \left( \frac{\partial \ln f_\tau(x; a, b)}{\partial b} \right)^2 f_\tau(x; a, b)\, dx$$
$$= A_\tau(a, b) \left\{ \left[ \frac{a^2}{b^2} + \left( \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right)^2 + \frac{2a}{b} \cdot \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] I_1 - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] I_2 + I_3 \right\}, \tag{3.6}$$
where
$$I_1 = \int_\tau^{\infty} x^{a-1} e^{-bx}\, dx, \qquad I_2 = \int_\tau^{\infty} x \cdot x^{a-1} e^{-bx}\, dx, \qquad I_3 = \int_\tau^{\infty} x^2 \cdot x^{a-1} e^{-bx}\, dx. \tag{3.7}$$
By the change of variable $t = bx$, we obtain
$$I_1 = \int_\tau^{\infty} x^{a-1} e^{-bx}\, dx = \frac{1}{b^a} \int_{b\tau}^{\infty} t^{a-1} e^{-t}\, dt = \frac{1}{b^a} \left[ \Gamma(a) - \int_0^{b\tau} t^{a-1} e^{-t}\, dt \right] = \frac{1}{b^a} \left[ \Gamma(a) - \Gamma(a)\, \Gamma_{b\tau}(a) \right],$$
that is,
$$I_1 = \frac{\Gamma(a)}{b^a} \left[ 1 - \Gamma_{b\tau}(a) \right] = \frac{1}{A_\tau(a, b)}, \tag{3.8}$$
if we have in view the relations (3.5) and (1.9).
Analogously, for the integrals $I_2$ and $I_3$ we obtain
$$I_2 = \frac{\Gamma(a+1)}{b^{a+1}} \left[ 1 - \Gamma_{b\tau}(a+1) \right], \qquad I_3 = \frac{\Gamma(a+2)}{b^{a+2}} \left[ 1 - \Gamma_{b\tau}(a+2) \right]. \tag{3.8a}$$
Substituting these values of the integrals $I_1$, $I_2$ and $I_3$ into (3.6), we obtain the following form for the measure $I_{X_\tau}(b)$:
$$I_{X_\tau}(b) = \frac{a^2}{b^2} + \left( \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right)^2 + \frac{2a}{b} \cdot \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] \frac{a\,[1 - \Gamma_{b\tau}(a+1)]}{b\,[1 - \Gamma_{b\tau}(a)]} + \frac{a(a+1)}{b^2} \cdot \frac{1 - \Gamma_{b\tau}(a+2)}{1 - \Gamma_{b\tau}(a)}, \tag{3.9}$$
that is,
$$I_{X_\tau}(b) = \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right]^2 - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] S_1 + S_2. \tag{3.10}$$
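The closed form (3.10) is straightforward to evaluate numerically. The sketch below (illustrative helper names and parameter values, with SciPy's gammainc playing $\Gamma_{b\tau}(\cdot)$) implements (3.10) and checks it against direct quadrature of (3.3):

```python
# Illustrative sketch of the closed form (3.10) for I_{X_tau}(b), checked
# against direct quadrature of (3.3); gammainc(s, x) plays Gamma_x(s).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, gammainc

def fisher_trunc(a, b, tau):
    G = lambda s: 1.0 - gammainc(s, b * tau)
    c = a / b + tau * (b * tau) ** (a - 1) * np.exp(-b * tau) / (Gamma(a) * G(a))
    S1 = a * G(a + 1) / (b * G(a))                  # (3.2a), E(X_tau)
    S2 = a * (a + 1) * G(a + 2) / (b**2 * G(a))     # (3.2b), E(X_tau^2)
    return c**2 - 2.0 * c * S1 + S2                 # (3.10)

a, b, tau = 2.5, 1.2, 0.8
G0 = 1.0 - gammainc(a, b * tau)
A = b**a / (Gamma(a) * G0)                          # (1.9)
f = lambda x: A * x ** (a - 1) * np.exp(-b * x)
const = tau * (b * tau) ** (a - 1) * np.exp(-b * tau) / (Gamma(a) * G0)
score = lambda x: a / b - x + const                 # (3.4b)
I_num, _ = quad(lambda x: score(x) ** 2 * f(x), tau, np.inf)
print(fisher_trunc(a, b, tau), I_num)               # the two values agree
```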
Remark 3.1. If the parameter $a \in \mathbb{N} = \{1, 2, \dots\}$ and the parameter $b \in \mathbb{R}_+$, then from Lemma 1.1 it follows that the constant $A_\tau(a, b)$ has the form
$$A_\tau(a, b) = \frac{b^a}{e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}}, \tag{3.11}$$
where $A^k_{a-1}$ has the form (1.6).
Also, from Lemma 1.3, we obtain the following relations:
$$1 - \Gamma_{b\tau}(a) = \frac{e^{-b\tau}}{\Gamma(a)} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k} \quad \text{or} \quad \Gamma(a)[1 - \Gamma_{b\tau}(a)] = e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}, \tag{3.12}$$
$$1 - \Gamma_{b\tau}(a+1) = \frac{e^{-b\tau}}{\Gamma(a+1)} \sum_{k=0}^{a} A^k_{a} (b\tau)^{a-k} \quad \text{or} \quad \Gamma(a+1)[1 - \Gamma_{b\tau}(a+1)] = e^{-b\tau} \sum_{k=0}^{a} A^k_{a} (b\tau)^{a-k}, \tag{3.12a}$$
$$1 - \Gamma_{b\tau}(a+2) = \frac{e^{-b\tau}}{\Gamma(a+2)} \sum_{k=0}^{a+1} A^k_{a+1} (b\tau)^{(a+1)-k} \quad \text{or} \quad \Gamma(a+2)[1 - \Gamma_{b\tau}(a+2)] = e^{-b\tau} \sum_{k=0}^{a+1} A^k_{a+1} (b\tau)^{(a+1)-k}. \tag{3.12b}$$
Under these new conditions, for $S_1$ and $S_2$ we obtain the expressions
$$S_1 = \frac{e^{-b\tau} \sum_{k=0}^{a} A^k_{a} (b\tau)^{a-k}}{b\, e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}}, \tag{3.13}$$
$$S_2 = \frac{e^{-b\tau} \sum_{k=0}^{a+1} A^k_{a+1} (b\tau)^{(a+1)-k}}{b^2\, e^{-b\tau} \sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}}. \tag{3.14}$$
Corollary 3.1. If $a \in \mathbb{N} = \{1, 2, \dots\}$ and $b \in \mathbb{R}_+$, then for the Fisher information measure corresponding to $X_\tau$ we obtain a new form:
$$I_{X_\tau}(b) = \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1}}{\sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}} \right]^2 - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1}}{\sum_{k=0}^{a-1} A^k_{a-1} (b\tau)^{(a-1)-k}} \right] S_1 + S_2, \tag{3.15}$$
where $S_1$ and $S_2$ have the forms (3.13) and (3.14).
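A sketch of (3.13)-(3.14) for integer $a$, using the falling factorials (1.6); by Lemma 1.3 the results should agree with the incomplete-gamma forms (3.2a)-(3.2b). Names and parameter values are illustrative.

```python
# Sketch of (3.13)-(3.14) for integer a via the falling factorials (1.6);
# the common factor e^{-b tau} cancels in the ratios, so it is omitted.
import math
from scipy.special import gammainc

def falling_sum(m, bt):
    # sum_{k=0}^{m} A^k_m (bt)^(m-k), with A^k_m = m(m-1)...(m-k+1), A^0_m = 1
    return sum(math.prod(range(m - k + 1, m + 1)) * bt ** (m - k)
               for k in range(m + 1))

a, b, tau = 4, 1.5, 0.7
bt = b * tau
S1 = falling_sum(a, bt) / (b * falling_sum(a - 1, bt))          # (3.13)
S2 = falling_sum(a + 1, bt) / (b**2 * falling_sum(a - 1, bt))   # (3.14)
G = lambda s: 1.0 - gammainc(s, bt)
print(S1, a * G(a + 1) / (b * G(a)))                            # agree
print(S2, a * (a + 1) * G(a + 2) / (b**2 * G(a)))               # agree
```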
Remark 3.2. In Theorems 1.1 and 1.2 we proved that the mean value and the variance of the random variable $X_\tau$ have the following expressions:
$$E(X_\tau) = \frac{a\,[1 - \Gamma_{b\tau}(a+1)]}{b\,[1 - \Gamma_{b\tau}(a)]}, \qquad a > 0,\ b > 0,\ \tau \ge 0, \tag{3.16}$$
$$\operatorname{Var} X_\tau = \frac{a(a+1)}{b^2} \cdot \frac{1 - \Gamma_{b\tau}(a+2)}{1 - \Gamma_{b\tau}(a)} - \frac{a^2}{b^2} \cdot \frac{[1 - \Gamma_{b\tau}(a+1)]^2}{[1 - \Gamma_{b\tau}(a)]^2}, \qquad a > 0,\ b > 0,\ \tau \ge 0, \tag{3.17}$$
or, if we have in view the relations (3.2a) and (3.2b),
$$E(X_\tau) = S_1 = \frac{a\,[1 - \Gamma_{b\tau}(a+1)]}{b\,[1 - \Gamma_{b\tau}(a)]}, \tag{3.16a}$$
$$E(X_\tau^2) = S_2 = \frac{a(a+1)}{b^2} \cdot \frac{1 - \Gamma_{b\tau}(a+2)}{1 - \Gamma_{b\tau}(a)}. \tag{3.16b}$$
Corollary 3.2. If the parameters $a, b \in \mathbb{R}_+$, then the Fisher information measure corresponding to $X_\tau$ has the following form:
$$I_{X_\tau}(b) = \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right]^2 - 2 \left[ \frac{a}{b} + \frac{\tau (b\tau)^{a-1} e^{-b\tau}}{\Gamma(a)[1 - \Gamma_{b\tau}(a)]} \right] E(X_\tau) + E(X_\tau^2). \tag{3.18}$$
Corollary 3.3. If $\tau = 0$, then we obtain
$$f_\tau(x; a, b) = f_X(x; a, b) = \begin{cases} 0 & \text{if } x \le 0, \\ \dfrac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx} & \text{if } x > 0, \end{cases} \qquad a, b \in \mathbb{R}_+, \tag{3.19}$$
$$E(X_\tau) = \frac{a}{b}, \qquad E(X_\tau^2) = E(X^2) = \frac{a(a+1)}{b^2}, \qquad \operatorname{Var} X_\tau = \operatorname{Var} X = \frac{a}{b^2}, \tag{3.20}$$
and for the Fisher information measure corresponding to such an ordinary gamma distribution we obtain the following value:
$$I_{X_\tau}(b) = I_X(b) = \operatorname{Var} X = \frac{a}{b^2}. \tag{3.21}$$
Corollary 3.4. If $\tau = 0$ and $a = 1$, then $X$ is a random variable which follows a negative exponential distribution and we have
$$f(x; b) = \begin{cases} 0 & \text{if } x \le 0, \\ b e^{-bx} & \text{if } x > 0, \end{cases} \qquad b > 0, \tag{3.22}$$
$$E(X) = \frac{1}{b}, \qquad E(X^2) = \frac{2}{b^2}, \qquad \operatorname{Var} X = \frac{1}{b^2}, \tag{3.23}$$
and for the Fisher information measure we get
$$I_X(b) = \operatorname{Var} X = \frac{1}{b^2}. \tag{3.24}$$
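As a last check, evaluating the illustrative fisher_trunc helper from the sketch after (3.10) at $\tau = 0$, $a = 1$ reproduces (3.24):

```python
# Sketch: the exponential case (tau = 0, a = 1) of Corollary 3.4, reusing
# the illustrative fisher_trunc helper from the sketch after (3.10).
print(fisher_trunc(1.0, 2.0, 0.0))   # 0.25 = 1/b^2 for b = 2
```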
References
[1] Kullback, S., Information Theory and Statistics, Wiley, New York, 1959.
[2] Mihoc, I., Fatu, C.I., Fisher's Information Measures for some Truncated Distributions, Information Theory in Mathematics, Balatonlelle, Hungary, July 4-7, 2000 (to appear).
[3] Mihoc, I., Fatu, C.I., Fisher's Information Measures for a Truncated Distribution (I), Proceedings of the "Tiberiu Popoviciu" Itinerant Seminar of Functional Equations, Approximation and Convexity, Editura SRIMA, Cluj-Napoca, Romania, 2001, pp. 119-131.
[4] Rényi, A., Probability Theory, Akadémiai Kiado, Budapest, 1970.