Stochastics and Dynamics
Vol. 17, No. 3 (2017) 1750020 (17 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0219493717500204
Infinite horizon optimal control of forward–backward
stochastic system driven by Teugels martingales
with Lévy processes

P. Muthukumar and R. Deepa
Department of Mathematics, Gandhigram Rural Institute,
Deemed University, Gandhigram – 624 302, Tamilnadu, India
pmuthukumargri@gmail.com
deepa.maths1729@gmail.com
Received 14 November 2015
Revised 2 April 2016
Accepted 17 April 2016
Published 3 June 2016
In this paper, we consider the infinite horizon nonlinear optimal control of a forward–backward stochastic system governed by Teugels martingales associated with Lévy processes and a one dimensional independent Brownian motion. Our aim is to establish sufficient and necessary conditions for optimality of the above stochastic system under convexity assumptions. Finally, an application is given to illustrate the optimal control problem for such a stochastic system.
Keywords: Optimal control; infinite horizon; forward–backward stochastic system; Lévy processes; Teugels martingales.
AMS Subject Classification: 49J15, 60J65, 93E20
1. Introduction
In the last few decades, optimal control and its ramifications have played a vital role in many different fields, including aerospace, process control, robotics, bioengineering, economics, finance, and management science, and it continues to be an active research area in control theory [11]. Control theory studies how a physical system can be steered to a given goal by applying a control signal. In classical control theory the controller is heuristically designed, whereas in optimal control theory the controller is selected to minimize a given cost functional [10, 11]. Aseev et al. discussed the Pontryagin maximum principle for optimal control problems in [4].
A class of infinite-horizon optimal control problems arises in studying models of optimal dynamic allocation of economic resources. In problems of this sort, the initial state is fixed, constraints are imposed on the behavior of the admissible trajectories at large times, and the objective functional is given by a discounted improper integral. Dmitruk et al. consider a broad class of problems on
infinite horizon, including most economic dynamics problems, and propose natural conditions guaranteeing the existence of solutions of these problems in [7]. Cartigny et al. [6] assumed a transversality condition to derive sufficient optimality conditions. Pickenhain et al. [17] used duality concepts to prove sufficient conditions for optimality of infinite horizon optimal control problems. The authors in the above references studied the infinite horizon control problem in the deterministic case only.
Stochastic control theory is a crucial branch of mathematics and has many important applications. In stochastic control, the system model and parameters are considered uncertain and the state is only indirectly measured [27]. Stochastic generalizations of optimal control results are used for determining the control signals in stochastic optimal control theory [21]. The class of forward–backward stochastic differential equations (FBSDEs) naturally arises in the form of a partially-coupled system. This has many interesting applications, especially in finance, such as option pricing and recursive utility problems (see [19, 26]). Wu [24] studied a Pontryagin type maximum principle for forward–backward stochastic control systems via the adjoint equation. FBSDEs with an infinite time horizon are still the subject of intensive study. Peng et al. [20] and Yin [25] studied the behavior of the solution process under different sets of assumptions, both on the coefficients of the FBSDE and on the terminal condition. Maximum principles for an infinite horizon optimal control problem with partial information were studied by Haadem et al. [8].
Lévy processes form a fundamental class of stochastic processes; they are rich mathematical objects with many potential applications. The theory of Lévy processes leads to tractable and attractive models that perform significantly better than the standard model. Lévy processes are based on infinitely divisible distributions. In finance, the infinitely divisible distributions need to be able to represent skewness and excess kurtosis. The earlier models having these characteristics were proposed for modeling financial data, replacing the underlying normal distribution by a more sophisticated infinitely divisible one. Lévy processes have been studied by many authors [3, 14, 22]. Nualart et al. [15] gave a martingale representation theorem associated with a class of Lévy processes. The existence and uniqueness of the solution of backward stochastic differential equations driven by Teugels martingales associated with a Lévy process having moments of all orders is derived in [16]. In [9] the authors studied a finite horizon mean field controlled stochastic differential equation driven by Teugels martingales associated with some Lévy processes and an independent Brownian motion; they derived the necessary and sufficient conditions for the stochastic optimal control problem in the form of a stochastic maximum principle. Bahlali et al. [5] studied the optimal control of a similar kind of problem under some convexity assumptions. In [28] Zong investigated anticipated backward stochastic differential equations (ABSDEs) driven by the Teugels martingales associated with Lévy processes, obtaining existence and uniqueness of solutions to these equations by means of a fixed-point theorem. Maximum principles for stochastic systems associated with Lévy processes, by means of convex analysis and duality techniques, have been studied extensively by many authors (see
[13, 23] and references therein). Necessary and sufficient conditions for optimal control of stochastic systems driven by Lévy processes are proved in [12] by the classical method of convex variation.
Infinite horizon optimal control problems arise naturally in economics when dealing with dynamical models of optimal allocation of resources. In [5, 9], the authors studied the stochastic optimal control problem on a finite time horizon. Agram and Øksendal [2] deal with the infinite horizon optimal control problem of a forward–backward stochastic delay system and derive necessary and sufficient maximum principles for optimal control under partial information in infinite horizon. Motivated by this, we construct a forward–backward stochastic system driven by Teugels martingales associated with Lévy processes [5], taking the time domain to be an infinite horizon, and derive necessary and sufficient optimality conditions by using the infinite horizon maximum principle for FBSDEs [2]. In this paper, the assumption of a transversality condition (see [1, 2] and the references therein) is used to study the infinite horizon optimal control problem. Transversality conditions at infinity play an important role in applying the Pontryagin maximum principle. These conditions characterize the behavior of the adjoint variables at infinity, which is economically meaningful and provides important characteristics of optimal economic growth.
This paper is organized as follows. In Sec. 2, some preliminaries and notations about Teugels martingales are provided. The problem of infinite horizon FBSDEs is formulated in Sec. 3. The main results of this paper, sufficient and necessary conditions for optimality, are derived in Sec. 4. Section 5 gives an illustrative example applying the main result.
2. Preliminaries
A real-valued stochastic process $L = \{L(t), t \ge 0\}$ defined on a complete probability space $(\Omega, \{\mathcal{F}_t\}_{t\ge 0}, P)$ is called a Lévy process if $L$ has stationary and independent increments with $L(0) = 0$ and $L(t)$ is continuous in probability, where $\{\mathcal{F}_t\}_{t\ge 0}$ satisfies the usual conditions, i.e. the filtration is a right continuous increasing family of complete sub-$\sigma$-algebras of $\mathcal{F}$. We denote by $L(t-) = \lim_{s\to t, s<t} L(s)$, $t > 0$, the left limit process and by $\Delta L(t) = L(t) - L(t-)$ the jump size at time $t$. The probability law of $L$ is determined by the one dimensional distribution of $L(t)$ for any $t > 0$, which has the characteristic function

\[ E[e^{ivL(t)}] = e^{t\Psi(v)}, \]

where $v \in \mathbb{R}$ and $\Psi(v)$ is the log-characteristic function of an infinitely divisible distribution. The Lévy–Khintchine formula [15] shows that $\Psi$ must take the form

\[ \Psi(v) = iav - \frac{\sigma^2}{2}v^2 + \int_{\mathbb{R}} \big(e^{ivx} - 1 - ivx\,I_{\{|x|\le 1\}}\big)\,\nu(dx), \]
where $a \in \mathbb{R}$, $\sigma \ge 0$ and $I$ is an indicator function. If $\sigma^2 = 0$, clearly $[L, L](t) = L^{(2)}(t)$. From the Lévy–Khintchine formula, we see that in general a Lévy process consists of three independent parts: a linear deterministic part, a Brownian part and a pure jump part. The Lévy measure $\nu$ dictates how the jumps occur.
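The three-part decomposition above can be illustrated numerically. The following sketch is not part of the paper's framework: the drift `a`, volatility `sigma`, and the finite-activity compound-Poisson jump part (with assumed rate and jump-size scale) are illustrative stand-ins for a general Lévy measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_levy_path(a=0.1, sigma=0.3, jump_rate=2.0, jump_scale=0.2,
                       T=1.0, n_steps=1000):
    """Simulate one path of a simple Levy process: linear drift +
    Brownian part + compound Poisson jumps with N(0, jump_scale^2) sizes."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Brownian increments
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    # compound Poisson increments: Poisson number of jumps per step
    n_jumps = rng.poisson(jump_rate * dt, n_steps)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum() for k in n_jumps])
    increments = a * dt + sigma * dW + jumps
    L = np.concatenate([[0.0], np.cumsum(increments)])
    return t, L

t, L = simulate_levy_path()
```

An infinite-activity pure-jump part would additionally require truncating or approximating the small jumps, which this finite-activity sketch avoids.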
The processes $L^{(i)} = \{L^{(i)}(t), t \ge 0\}$, for $i = 1, 2, \ldots$, are also Lévy processes, called the power jump processes; they jump at the same points as the original Lévy process. The Lévy measure $\nu$, defined on $\mathbb{R}_0 := \mathbb{R}\setminus\{0\}$ and corresponding to the Lévy process $L(t)$, satisfies the following:

(i) $\int_{\mathbb{R}} (1 \wedge x^2)\,\nu(dx) < \infty$,
(ii) there exist $\varepsilon, \lambda > 0$ such that $\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} e^{\lambda|x|}\,\nu(dx) < \infty$.

This implies that the random variable $L(t)$ has moments of all orders, that is,

\[ \int_{-\infty}^{\infty} |x|^i\,\nu(dx) < \infty, \quad i \ge 2. \]
We assume that $\mathcal{F}_t$ is the smallest $\sigma$-algebra generated by $W(t)$ and $L(t)$, i.e.

\[ \mathcal{F}_t = \sigma(W(s), s \le t) \vee \sigma(L(s), s \le t) \vee \mathcal{N}, \]

where $W(\cdot) = \{W(t)\}_{t\ge 0}$ is a one dimensional Brownian motion and $\mathcal{N}$ denotes the totality of $P$-null sets. A convenient basis for martingale representation is provided by the so-called Teugels martingales, meaning that this family has the predictable representation property. The reader may refer to [15, 16] for further details about Lévy processes. We denote by $\{H^i(t), t \ge 0\}_{i=1}^{\infty}$ the Teugels martingales associated with the Lévy process $L(t)$. The family of processes $\{H^i(t)\}_{i=1}^{\infty}$ is given by

\[ H^i(t) = C_{i,i}Y^{(i)}(t) + C_{i,i-1}Y^{(i-1)}(t) + \cdots + C_{i,1}Y^{(1)}(t), \]

where $Y^{(i)}(t) = L^{(i)}(t) - m_i t$ for $i \ge 1$, with $m_1 = E[L(1)] = a + \int_{|x|\ge 1} x\,\nu(dx)$, $m_i = \int_{-\infty}^{\infty} x^i\,\nu(dx)$ for all $i \ge 2$, $L^{(1)}(t) = L(t)$, and $L^{(i)}(t) = \sum_{0<s\le t}(\Delta L(s))^i$ for $i \ge 2$. The coefficients $C_{i,j}$ correspond to the orthonormalization of the polynomials $1, x, x^2, \ldots$ with respect to the measure $\mu(dx) = x^2\nu(dx) + \sigma^2\delta_0(dx)$.
The Teugels martingales $\{H^i(t)\}_{i=1}^{\infty}$ are pairwise strongly orthogonal, and their predictable quadratic covariation processes are given by $\langle H^i, H^j\rangle(t) = \delta_{ij}t$, so that $E[H^i(t)H^j(t)] = \delta_{ij}t$ for any $i, j$; i.e. $[H^i, H^j](t) - \langle H^i, H^j\rangle(t)$ is a martingale with zero expectation and piecewise constant trajectories. Moreover, $[H^i, H^j] - \langle H^i, H^j\rangle$ and the Lévy process have the same jump times. Here $[H^i(t), H^j(t)]$ denotes the quadratic covariation process corresponding to $H^i(\cdot)$ and $H^j(\cdot)$, also called the bracket process [18]. The martingale $H^i(t)$ is called the orthonormalized $i$th power jump process. As a consequence of this construction, every square integrable martingale adapted to $\mathcal{F}_t$ can be represented as a stochastic integral with respect to the Brownian motion $W(t)$ plus a sum of stochastic integrals with respect to the family $\{H^i(t)\}$ of Teugels martingales.
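The orthonormalization producing the coefficients $C_{i,j}$ can be made concrete. The sketch below is illustrative only: the two-atom Lévy measure and the value of $\sigma$ are hypothetical choices. It runs Gram–Schmidt on the monomials $1, x, x^2$ under $\mu(dx) = x^2\nu(dx) + \sigma^2\delta_0(dx)$ and checks that the resulting Gram matrix is the identity.

```python
import numpy as np

# assumed toy Levy measure: two atoms (finite activity), plus Brownian sigma
nu_points = np.array([-0.5, 1.0])   # jump sizes
nu_weights = np.array([0.8, 0.4])   # nu({x}) masses
sigma = 0.3

def mu_inner(m, n):
    """Inner product of x^m and x^n w.r.t. mu(dx) = x^2 nu(dx) + sigma^2 delta_0(dx)."""
    val = np.sum(nu_points ** (m + n) * nu_points ** 2 * nu_weights)
    if m == 0 and n == 0:           # delta_0 only sees the constant term
        val += sigma ** 2
    return val

def teugels_coeffs(N):
    """Gram-Schmidt on 1, x, x^2, ... under mu; row i holds the coefficients
    C_{i,j} of the i-th orthonormal polynomial in the monomial basis."""
    basis = []
    for i in range(N):
        v = np.zeros(N)
        v[i] = 1.0                  # monomial x^i
        for b in basis:
            proj = sum(v[m] * b[n] * mu_inner(m, n)
                       for m in range(N) for n in range(N))
            v = v - proj * b
        norm = np.sqrt(sum(v[m] * v[n] * mu_inner(m, n)
                           for m in range(N) for n in range(N)))
        basis.append(v / norm)
    return np.array(basis)

C = teugels_coeffs(3)
# orthonormality check: the Gram matrix of the rows should be the identity
G = np.array([[sum(C[i, m] * C[j, n] * mu_inner(m, n)
                   for m in range(3) for n in range(3))
               for j in range(3)] for i in range(3)])
```

With a continuous Lévy measure the inner products would be integrals rather than finite sums, but the orthonormalization step is the same.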
3. Formulation of the Problem

Let $W(t)$ be a one dimensional standard Brownian motion and $\{H^i(t)\}_{i=1}^{\infty}$ the pairwise strongly orthonormal Teugels martingales associated with some Lévy process having moments of all orders. In this paper we study the following infinite horizon nonlinear optimal control problem for a coupled forward–backward stochastic differential system driven by Teugels martingales associated with Lévy processes:

(i) The forward equation in the unknown measurable process $X(t)$ is defined as

\[ dX(t) = b(t, X(t), u(t))\,dt + f(t, X(t), u(t))\,dW(t) + \sum_{i=1}^{\infty}\delta_i(t, X(t-), u(t))\,dH^i(t), \tag{3.1} \]

where $t \in [0, \infty)$ and $X(t-)$ denotes $\lim_{s\to t, s<t} X(s)$.
(ii) The backward equation in the unknown measurable processes $Y(t)$, $Z(t)$ and $\theta(t)$ is defined as

\[ dY(t) = -g(t, X(t), Y(t), Z(t), u(t))\,dt + Z(t)\,dW(t) + \sum_{i=1}^{\infty}\theta_i(t)\,dH^i(t), \quad t \in [0, \infty). \tag{3.2} \]

We interpret the infinite horizon backward stochastic differential equation (BSDE) (3.2) in the sense that, for all $T < \infty$, the triple $(Y(t), Z(t), \theta(t))$ solves the equation

\[ Y(t) = Y(T) + \int_t^T g(s, X(s), Y(s), Z(s), u(s))\,ds - \int_t^T Z(s)\,dW(s) - \int_t^T \sum_{i=1}^{\infty}\theta_i(s)\,dH^i(s), \tag{3.3} \]

where $Y(t)$ is bounded a.s., uniformly in $t \in [0, \infty)$. It is assumed that the processes $(Y(t), Z(t), \theta_i(t))$ also satisfy, for some constant $k$,

\[ E\Big[\sup_{t\ge 0} e^{kt}Y^2(t) + \int_0^{\infty} e^{kt}\Big(Z^2(t) + \sum_{i=1}^{\infty}\theta_i^2(t)\Big)dt\Big] < \infty. \]
Here $X, Y, Z, \theta_i : [0, \infty) \to \mathbb{R}$. Let the admissible control set $U$ be a nonempty convex subset of $\mathbb{R}$. An admissible control process $u(\cdot)$ is defined as an $\mathcal{F}_t$-predictable process with values in $U$ such that $E[\int_0^{\infty} |u(t)|^2\,dt] < \infty$. We denote by $\mathcal{A}$ the set of all admissible control processes $u(\cdot)$. Let $b, f, \delta_i : [0, \infty) \times \mathbb{R} \times U \to \mathbb{R}$ and $g : [0, \infty) \times \mathbb{R}^3 \times U \to \mathbb{R}$, and assume that the corresponding solution $X(t)$ of the given system exists, with

\[ E\int_0^{\infty} |X(t)|^2\,dt < \infty. \]
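As a rough numerical sketch of the forward dynamics (3.1), one can truncate the Teugels series at finitely many terms and discretize in time. The coefficient functions below are hypothetical, and the martingale increments $dH^i$ are crudely approximated by independent Gaussians matching the predictable quadratic variation $\langle H^i\rangle(t) = t$, which ignores their jump structure; this is a simplifying assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_forward(b, f, delta, u, x0, T=1.0, n_steps=200, n_teugels=3):
    """Euler-Maruyama sketch for the forward equation (3.1), truncating the
    Teugels series at n_teugels terms and approximating each dH^i increment
    by an independent N(0, dt) variable."""
    dt = T / n_steps
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        dH = rng.normal(0.0, np.sqrt(dt), n_teugels)
        X[k + 1] = (X[k]
                    + b(t, X[k], u(t)) * dt
                    + f(t, X[k], u(t)) * dW
                    + sum(delta(i, t, X[k], u(t)) * dH[i]
                          for i in range(n_teugels)))
    return X

# toy coefficients (hypothetical, for illustration only)
X = euler_forward(b=lambda t, x, u: -0.5 * x + u,
                  f=lambda t, x, u: 0.2 * x,
                  delta=lambda i, t, x, u: 0.1 * x / (i + 1),
                  u=lambda t: 1.0, x0=1.0)
```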
The performance functional corresponding to the system (3.1) and (3.2) is

\[ J(u) = E\Big[\int_0^{\infty} K(t, X(t), Y(t), Z(t), \theta(t), u(t))\,dt + h(Y(0))\Big], \tag{3.4} \]

where $K : [0, \infty) \times \mathbb{R}^3 \times \mathcal{R} \times U \to \mathbb{R}$, with $\mathcal{R}$ the set of all functions from $\mathbb{R}_0$ to $\mathbb{R}$, $h : \mathbb{R} \to \mathbb{R}$, and $K$ satisfies

\[ E\int_0^{\infty} |K(t, X(t), Y(t), Z(t), \theta(t), u(t))|\,dt < \infty, \quad \forall u \in \mathcal{A}. \]

The optimal control problem is to find an optimal control $u^* \in \mathcal{A}$ such that

\[ \sup_{u\in\mathcal{A}} J(u) = J(u^*). \]

The Hamiltonian $H : [0, \infty) \times \mathbb{R}^3 \times \mathcal{R} \times U \times \mathbb{R}^3 \times \mathcal{R} \to \mathbb{R}$ is defined by

\[ H(t, x, y, z, \theta, u, \lambda, p, q, r) = K(t, x, y, z, \theta, u) + g(t, x, y, z, u)\lambda(t) + b(t, x, u)p(t) + f(t, x, u)q(t) + \sum_{i=1}^{\infty}\delta_i(t, x, u)r_i(t). \tag{3.5} \]
We assume that the coefficients and $h$ are Fréchet differentiable ($C^1$) with respect to the variables $(X, Y, Z, \theta, u)$ and that

\[ E\int_0^{\infty}\Big[\Big(\frac{\partial b}{\partial x}(t, x(t), u(t))\Big)^2 + \Big(\frac{\partial f}{\partial x}(t, x(t), u(t))\Big)^2 + \sum_{i=1}^{\infty}\Big(\frac{\partial \delta_i}{\partial x}(t, x(t), u(t))\Big)^2\Big]dt < \infty, \]

\[ E\int_0^{\infty}\Big[\Big(\frac{\partial b}{\partial u}(t, x(t), u(t))\Big)^2 + \Big(\frac{\partial f}{\partial u}(t, x(t), u(t))\Big)^2 + \sum_{i=1}^{\infty}\Big(\frac{\partial \delta_i}{\partial u}(t, x(t), u(t))\Big)^2\Big]dt < \infty. \]

Let us introduce the following pair of forward–backward stochastic differential equations in the adjoint processes $\lambda(t), p(t), q(t), r_i(t)$:

\[ d\lambda(t) = \frac{\partial H}{\partial y}(t)\,dt + \frac{\partial H}{\partial z}(t)\,dW(t) + \sum_{i=1}^{\infty}\frac{\partial H}{\partial \theta_i}(t)\,dH^i(t), \qquad \lambda(0) = h'(Y(0)), \tag{3.6} \]

\[ dp(t) = -\frac{\partial H}{\partial x}(t)\,dt + q(t)\,dW(t) + \sum_{i=1}^{\infty} r_i(t)\,dH^i(t). \tag{3.7} \]
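The structure of the adjoint drifts can be checked symbolically. The sketch below (truncating the Teugels sum in (3.5) to a single term, with placeholder coefficient functions; an illustration, not the paper's computation) forms the Hamiltonian and reads off the drifts of (3.6) and (3.7):

```python
import sympy as sp

t, x, y, z, u = sp.symbols('t x y z u')
lam, p, q, r1 = sp.symbols('lambda p q r1')

# placeholder coefficient functions (one Teugels term kept for illustration)
K = sp.Function('K')(t, x, y, z, u)
g = sp.Function('g')(t, x, y, z, u)
b = sp.Function('b')(t, x, u)
f = sp.Function('f')(t, x, u)
d1 = sp.Function('delta1')(t, x, u)

# Hamiltonian (3.5), truncated to a single Teugels martingale
H = K + g * lam + b * p + f * q + d1 * r1

# drifts of the adjoint equations (3.6)-(3.7)
dlam_drift = sp.diff(H, y)      # forward adjoint:  dlambda = dH/dy dt + ...
dp_drift = -sp.diff(H, x)       # backward adjoint: dp = -dH/dx dt + ...
```

Since $b$, $f$ and $\delta_1$ do not depend on $y$, the forward adjoint drift reduces to $\partial K/\partial y + \lambda\,\partial g/\partial y$, which sympy confirms.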
4. Main Results

In this section we use the Itô formula and some basic assumptions to derive the sufficient and necessary conditions for the infinite horizon optimal control of the stochastic system driven by Teugels martingales associated with Lévy processes.
4.1. The sufficient conditions for optimality

Theorem 4.1. Let $\hat{u} \in \mathcal{A}$ with corresponding state processes $\hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t)$, and let the adjoint processes $\hat{\lambda}(t), \hat{p}(t), \hat{q}(t)$ and $\hat{r}(t)$ satisfy Eqs. (3.6), (3.7). Suppose $\hat{u}$ satisfies the following assertions:

(H1) (Concavity) The functions $x \mapsto h(x)$ and

\[ (x, y, z, \theta, u) \mapsto H(t, x, y, z, \theta, u, \hat{\lambda}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t)) \]

are concave, for all $t \in [0, \infty)$.

(H2) (Conditional maximum principle)

\[ \max_{v\in U} E[H(t, \hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t), v, \hat{\lambda}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t)) \mid \xi_t] = E[H(t, \hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t), \hat{u}(t), \hat{\lambda}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t)) \mid \xi_t], \]

where $\xi_t \subseteq \mathcal{F}_t$ for all $t \ge 0$ is a given subfiltration, representing the information available to the controller at time $t$.

(H3) (Transversality condition)

\[ \lim_{T\to\infty} E[\hat{p}(T)\,\Delta\hat{X}(T)] \le 0 \quad \text{and} \quad \lim_{T\to\infty} E[\hat{\lambda}(T)\,\Delta\hat{Y}(T)] \ge 0, \]

where $\Delta\hat{X}(T) = \hat{X}(T) - X(T)$ and $\Delta\hat{Y}(T) = \hat{Y}(T) - Y(T)$.

Then $\hat{u}$ is an optimal control for the problem (3.1)–(3.4), i.e.

\[ J(\hat{u}) = \sup_{u\in\mathcal{A}} J(u). \]
Proof. Let $u \in \mathcal{A}$ be arbitrary. We have to prove that $J(\hat{u}) - J(u) \ge 0$, i.e. that $\hat{u}$ is an optimal control. Since

\[ J(u) = E\Big[\int_0^{\infty} K(t, X(t), Y(t), Z(t), \theta(t), u(t))\,dt + h(Y(0))\Big], \]

we have

\[ J(\hat{u}) - J(u) = E\int_0^{\infty}\{K(t, \hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t), \hat{u}(t)) - K(t, X(t), Y(t), Z(t), \theta(t), u(t))\}\,dt + E[h(\hat{Y}(0)) - h(Y(0))] = J_1 + J_2, \tag{4.1} \]

where

\[ J_1 = E\int_0^{\infty}\{\hat{K}(t) - K(t)\}\,dt, \qquad J_2 = E[h(\hat{Y}(0)) - h(Y(0))]. \]
From the definition (3.5) of the Hamiltonian, $K(t)$ can be written as

\[ K(t) = H(t) - g(t)\hat{\lambda}(t) - b(t)\hat{p}(t) - f(t)\hat{q}(t) - \sum_{i=1}^{\infty}\delta_i(t)\hat{r}_i(t), \]
\[ \hat{K}(t) = \hat{H}(t) - \hat{g}(t)\hat{\lambda}(t) - \hat{b}(t)\hat{p}(t) - \hat{f}(t)\hat{q}(t) - \sum_{i=1}^{\infty}\hat{\delta}_i(t)\hat{r}_i(t). \]

Hence

\[ J_1 = E\int_0^{\infty}\Big[(\hat{H}(t) - H(t)) - \hat{\lambda}(t)\Delta\hat{g}(t) - \hat{p}(t)\Delta\hat{b}(t) - \hat{q}(t)\Delta\hat{f}(t) - \sum_{i=1}^{\infty}\Delta\hat{\delta}_i(t)\hat{r}_i(t)\Big]dt, \tag{4.2} \]

where $\Delta\hat{g}(t) = \hat{g}(t) - g(t)$, $\Delta\hat{b}(t) = \hat{b}(t) - b(t)$, and so on.
Since by (H1) $H$ is concave, we have

\[ H(t) - \hat{H}(t) \le \frac{\partial\hat{H}}{\partial x}(x - \hat{x}) + \frac{\partial\hat{H}}{\partial y}(y - \hat{y}) + \frac{\partial\hat{H}}{\partial z}(z - \hat{z}) + \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(\theta_i - \hat{\theta}_i) + \frac{\partial\hat{H}}{\partial u}(u - \hat{u}), \]

i.e.

\[ \hat{H}(t) - H(t) \ge \frac{\partial\hat{H}}{\partial x}(t)\Delta\hat{x}(t) + \frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{y}(t) + \frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{z}(t) + \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t) + \frac{\partial\hat{H}}{\partial u}(t)\Delta\hat{u}(t). \]

Substituting the above inequality into (4.2), we get

\[ J_1 \ge E\int_0^{\infty}\Big[\frac{\partial\hat{H}}{\partial x}(t)\Delta\hat{x}(t) + \frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{y}(t) + \frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{z}(t) + \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t) + \frac{\partial\hat{H}}{\partial u}(t)\Delta\hat{u}(t) - \hat{\lambda}(t)\Delta\hat{g}(t) - \hat{p}(t)\Delta\hat{b}(t) - \hat{q}(t)\Delta\hat{f}(t) - \sum_{i=1}^{\infty}\Delta\hat{\delta}_i(t)\hat{r}_i(t)\Big]dt. \tag{4.3} \]
Now take

\[ J_2 = E[h(\hat{Y}(0)) - h(Y(0))]. \tag{4.4} \]

Also by (H1), $h$ is concave, so that

\[ h(Y(0)) - h(\hat{Y}(0)) \le h'(\hat{Y}(0))(Y(0) - \hat{Y}(0)) = \hat{\lambda}(0)(Y(0) - \hat{Y}(0)), \]

hence

\[ h(\hat{Y}(0)) - h(Y(0)) \ge \hat{\lambda}(0)(\hat{Y}(0) - Y(0)) = \hat{\lambda}(0)\Delta\hat{Y}(0), \]

and therefore $E[h(\hat{Y}(0)) - h(Y(0))] \ge E[\hat{\lambda}(0)\Delta\hat{Y}(0)]$.
Using the above inequality in (4.4), we have

\[ J_2 \ge E[\hat{\lambda}(0)\,\Delta\hat{Y}(0)]. \tag{4.5} \]
Applying the Itô product formula to $\hat{\lambda}(t)\Delta\hat{Y}(t)$ on $[0, T]$, taking expectations and using (3.2), (3.6) and (H3), we have

\[ E[\hat{\lambda}(T)\Delta\hat{Y}(T)] - E[\hat{\lambda}(0)\Delta\hat{Y}(0)] = E\int_0^T\Big[\frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{Y}(t) - \hat{\lambda}(t)\Delta\hat{g}(t) + \frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{Z}(t) + \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t)\Big]dt. \]
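For clarity, the Itô product rule invoked here expands, in the notation above, as

```latex
d\big(\hat{\lambda}(t)\,\Delta\hat{Y}(t)\big)
   = \hat{\lambda}(t)\,d\Delta\hat{Y}(t)
   + \Delta\hat{Y}(t)\,d\hat{\lambda}(t)
   + d\big[\hat{\lambda},\,\Delta\hat{Y}\big](t),
```

where $d\Delta\hat{Y}$ contributes the drift $-\hat{\lambda}(t)\Delta\hat{g}(t)\,dt$ by (3.2), $d\hat{\lambda}$ contributes $\frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{Y}(t)\,dt$ by (3.6), and the bracket term contributes $\frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{Z}(t)\,dt + \sum_i\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t)\,dt$, since $\langle W, H^i\rangle = 0$ and $\langle H^i, H^j\rangle(t) = \delta_{ij}t$; the $dW$ and $dH^i$ integrals are martingales and vanish under expectation.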
Letting $T \to \infty$, we get

\[ E[\hat{\lambda}(0)\Delta\hat{Y}(0)] = \lim_{T\to\infty} E[\hat{\lambda}(T)\Delta\hat{Y}(T)] - E\int_0^{\infty}\Big[\frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{Y}(t) - \hat{\lambda}(t)\Delta\hat{g}(t) + \frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{Z}(t) + \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t)\Big]dt. \tag{4.6} \]
Substituting (4.6) into (4.5), we have

\[ J_2 \ge \lim_{T\to\infty} E[\hat{\lambda}(T)\Delta\hat{Y}(T)] + E\int_0^{\infty}\Big[-\frac{\partial\hat{H}}{\partial y}(t)\Delta\hat{Y}(t) + \hat{\lambda}(t)\Delta\hat{g}(t) - \frac{\partial\hat{H}}{\partial z}(t)\Delta\hat{Z}(t) - \sum_{i=1}^{\infty}\frac{\partial\hat{H}}{\partial\theta_i}(t)\Delta\hat{\theta}_i(t)\Big]dt. \tag{4.7} \]
Substituting (4.3) and (4.7) into (4.1), we get

\[ J(\hat{u}) - J(u) = J_1 + J_2 \ge \lim_{T\to\infty} E[\hat{\lambda}(T)\Delta\hat{Y}(T)] + E\int_0^{\infty}\Big[\frac{\partial\hat{H}}{\partial x}(t)\Delta\hat{X}(t) + \frac{\partial\hat{H}}{\partial u}(t)\Delta\hat{u}(t) - \hat{p}(t)\Delta\hat{b}(t) - \hat{q}(t)\Delta\hat{f}(t) - \sum_{i=1}^{\infty}\Delta\hat{\delta}_i(t)\hat{r}_i(t)\Big]dt. \tag{4.8} \]
Now apply the Itô product formula to $\hat{p}(t)\Delta\hat{X}(t)$ on $[0, T]$, take expectations and use (3.1), (3.7) and (H3):

\[ E[\hat{p}(T)\Delta\hat{X}(T)] = E\int_0^T\Big[-\frac{\partial\hat{H}}{\partial x}(t)\Delta\hat{X}(t) + \hat{p}(t)\Delta\hat{b}(t) + \hat{q}(t)\Delta\hat{f}(t) + \sum_{i=1}^{\infty}\Delta\hat{\delta}_i(t)\hat{r}_i(t)\Big]dt. \]
Letting $T \to \infty$, we have

\[ \lim_{T\to\infty} E[\hat{p}(T)\Delta\hat{X}(T)] = E\int_0^{\infty}\Big[-\frac{\partial\hat{H}}{\partial x}(t)\Delta\hat{X}(t) + \hat{p}(t)\Delta\hat{b}(t) + \hat{q}(t)\Delta\hat{f}(t) + \sum_{i=1}^{\infty}\Delta\hat{\delta}_i(t)\hat{r}_i(t)\Big]dt. \]
Substituting the above equality into (4.8) and using the transversality condition (H3), we have

\[ J(\hat{u}) - J(u) \ge E\int_0^{\infty}\frac{\partial\hat{H}}{\partial u}(t)\Delta\hat{u}(t)\,dt = E\int_0^{\infty} E\Big[\frac{\partial\hat{H}}{\partial u}(t)\,\Big|\,\xi_t\Big]\Delta\hat{u}(t)\,dt \ge 0, \]

where the last inequality follows from the conditional maximum principle (H2), since $U$ is convex and $\hat{u}(t)$ maximizes $v \mapsto E[H(\ldots, v, \ldots) \mid \xi_t]$ over $U$. So we conclude that $J(\hat{u}) - J(u) \ge 0$, i.e. $\hat{u}$ is an optimal control.
4.2. Necessary conditions of optimality

Theorem 4.2. Suppose that $\hat{u} \in \mathcal{A}$ with corresponding solutions $\hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t), \hat{\lambda}(t), \hat{p}(t), \hat{q}(t)$ and $\hat{r}(t)$ of Eqs. (3.1), (3.2), (3.6) and (3.7), and assume that the following conditions hold:

(H4) For all $t_0 \in (0, \infty)$, $h > 0$ and all bounded $\xi_{t_0}$-measurable random variables $\alpha$, the control process $\omega(t)$ defined by $\omega(t) = \alpha I_{[t_0, t_0+h)}(t)$ belongs to $\mathcal{A}$, where

\[ I_{[t_0, t_0+h)}(t) = \begin{cases} 1 & \text{if } t \in [t_0, t_0+h), \\ 0 & \text{otherwise.} \end{cases} \]

(H5) For all $u, \omega \in \mathcal{A}$, where $\omega$ is bounded, there exists $\varepsilon > 0$ such that

\[ u(t) + s\omega(t) \in \mathcal{A}, \quad s \in (-\varepsilon, \varepsilon),\ t \in [0, \infty). \]

(H6) $\lim_{T\to\infty} E[\hat{p}(T)\xi(T)] = 0$ and $\lim_{T\to\infty} E[\hat{\lambda}(T)\Phi(T)] = 0$, where $\xi$ and $\Phi$ are the derivative processes defined below.

Then the following assertions are equivalent:

(i) For all bounded $\omega \in \mathcal{A}$, $\frac{d}{ds}J(\hat{u} + s\omega)\big|_{s=0} = 0$.

(ii) For all $t \in [0, \infty)$,

\[ E\Big[\frac{\partial}{\partial u}H(t, \hat{X}(t), \hat{Y}(t), \hat{Z}(t), \hat{\theta}(t), u, \hat{\lambda}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t))\,\Big|\,\xi_t\Big]_{u=\hat{u}(t)} = 0. \]
Proof. (i) $\Rightarrow$ (ii). Let us define the derivative processes

\[ \xi(t) = \frac{d}{ds}X^{\hat{u}+s\omega}(t)\Big|_{s=0}, \qquad \Phi(t) = \frac{d}{ds}Y^{\hat{u}+s\omega}(t)\Big|_{s=0}, \]
\[ \eta(t) = \frac{d}{ds}Z^{\hat{u}+s\omega}(t)\Big|_{s=0}, \qquad \zeta(t) = \frac{d}{ds}\theta^{\hat{u}+s\omega}(t)\Big|_{s=0}, \]

and assume that

\[ E\int_0^{\infty}\Big[\Big|\frac{\partial K(t)}{\partial x}\xi(t)\Big| + \Big|\frac{\partial K(t)}{\partial y}\Phi(t)\Big| + \Big|\frac{\partial K(t)}{\partial z}\eta(t)\Big| + \Big|\frac{\partial K(t)}{\partial u}\omega(t)\Big| + \sum_{i=1}^{\infty}\Big|\frac{\partial K(t)}{\partial\theta_i}\zeta_i(t)\Big|\Big]dt < \infty. \]
Assume that (i) holds:

\[ \frac{d}{ds}J(\hat{u} + s\omega)\Big|_{s=0} = 0. \tag{4.9} \]

But

\[ \frac{d}{ds}J(\hat{u} + s\omega)\Big|_{s=0} = E\Big[\int_0^{\infty}\Big(\frac{\partial K(t)}{\partial x}\xi(t) + \frac{\partial K(t)}{\partial y}\Phi(t) + \frac{\partial K(t)}{\partial z}\eta(t) + \frac{\partial K(t)}{\partial u}\omega(t) + \sum_{i=1}^{\infty}\frac{\partial K(t)}{\partial\theta_i}\zeta_i(t)\Big)dt + h'(\hat{Y}(0))\Phi(0)\Big]. \tag{4.10} \]

Using (4.9) in (4.10) gives

\[ E\Big[\int_0^{\infty}\Big(\frac{\partial K(t)}{\partial x}\xi(t) + \frac{\partial K(t)}{\partial y}\Phi(t) + \frac{\partial K(t)}{\partial z}\eta(t) + \frac{\partial K(t)}{\partial u}\omega(t) + \sum_{i=1}^{\infty}\frac{\partial K(t)}{\partial\theta_i}\zeta_i(t)\Big)dt + h'(\hat{Y}(0))\Phi(0)\Big] = 0. \tag{4.11} \]
The derivative processes satisfy

\[ d\xi(t) = \Big[\frac{\partial b(t)}{\partial x}\xi(t) + \frac{\partial b(t)}{\partial u}\omega(t)\Big]dt + \Big[\frac{\partial f(t)}{\partial x}\xi(t) + \frac{\partial f(t)}{\partial u}\omega(t)\Big]dW(t) + \sum_{i=1}^{\infty}\Big[\frac{\partial\delta_i(t)}{\partial x}\xi(t) + \frac{\partial\delta_i(t)}{\partial u}\omega(t)\Big]dH^i(t) \tag{4.12} \]

and

\[ d\Phi(t) = -\Big[\frac{\partial g(t)}{\partial x}\xi(t) + \frac{\partial g(t)}{\partial y}\Phi(t) + \frac{\partial g(t)}{\partial z}\eta(t) + \frac{\partial g(t)}{\partial u}\omega(t)\Big]dt + \eta(t)\,dW(t) + \sum_{i=1}^{\infty}\zeta_i(t)\,dH^i(t). \tag{4.13} \]
Applying the Itô product formula to $\hat{\lambda}(t)\Phi(t)$ on $[0, T]$, taking expectations and using (3.6) and (4.13) implies that

\[ E[\hat{\lambda}(T)\Phi(T)] - E[\hat{\lambda}(0)\Phi(0)] = E\int_0^T\Big[-\hat{\lambda}(t)\Big(\frac{\partial g(t)}{\partial x}\xi(t) + \frac{\partial g(t)}{\partial y}\Phi(t) + \frac{\partial g(t)}{\partial z}\eta(t) + \frac{\partial g(t)}{\partial u}\omega(t)\Big) + \Phi(t)\frac{\partial\hat{H}(t)}{\partial y} + \eta(t)\frac{\partial\hat{H}(t)}{\partial z} + \sum_{i=1}^{\infty}\frac{\partial\hat{H}(t)}{\partial\theta_i}\zeta_i(t)\Big]dt. \]
Letting $T \to \infty$, this becomes

\[ E[\hat{\lambda}(0)\Phi(0)] = \lim_{T\to\infty} E[\hat{\lambda}(T)\Phi(T)] - E\int_0^{\infty}\Big[-\hat{\lambda}(t)\Big(\frac{\partial g(t)}{\partial x}\xi(t) + \frac{\partial g(t)}{\partial y}\Phi(t) + \frac{\partial g(t)}{\partial z}\eta(t) + \frac{\partial g(t)}{\partial u}\omega(t)\Big) + \Phi(t)\frac{\partial\hat{H}(t)}{\partial y} + \eta(t)\frac{\partial\hat{H}(t)}{\partial z} + \sum_{i=1}^{\infty}\frac{\partial\hat{H}(t)}{\partial\theta_i}\zeta_i(t)\Big]dt. \]
Since $h'(\hat{Y}(0)) = \hat{\lambda}(0)$, we have $E[h'(\hat{Y}(0))\Phi(0)] = E[\hat{\lambda}(0)\Phi(0)]$, and using (H6) the above equality can be written as

\[ E[h'(\hat{Y}(0))\Phi(0)] = E\int_0^{\infty}\Big[\hat{\lambda}(t)\Big(\frac{\partial g(t)}{\partial x}\xi(t) + \frac{\partial g(t)}{\partial y}\Phi(t) + \frac{\partial g(t)}{\partial z}\eta(t) + \frac{\partial g(t)}{\partial u}\omega(t)\Big) - \Phi(t)\frac{\partial\hat{H}(t)}{\partial y} - \eta(t)\frac{\partial\hat{H}(t)}{\partial z} - \sum_{i=1}^{\infty}\frac{\partial\hat{H}(t)}{\partial\theta_i}\zeta_i(t)\Big]dt. \tag{4.14} \]
Substituting Eq. (4.14) into (4.11), we have

\[ \frac{d}{ds}J(\hat{u} + s\omega)\Big|_{s=0} = E\int_0^{\infty}\Big[\frac{\partial K(t)}{\partial x}\xi(t) + \frac{\partial K(t)}{\partial y}\Phi(t) + \frac{\partial K(t)}{\partial z}\eta(t) + \frac{\partial K(t)}{\partial u}\omega(t) + \sum_{i=1}^{\infty}\frac{\partial K(t)}{\partial\theta_i}\zeta_i(t) + \hat{\lambda}(t)\Big(\frac{\partial g(t)}{\partial x}\xi(t) + \frac{\partial g(t)}{\partial y}\Phi(t) + \frac{\partial g(t)}{\partial z}\eta(t) + \frac{\partial g(t)}{\partial u}\omega(t)\Big) - \Phi(t)\frac{\partial\hat{H}(t)}{\partial y} - \eta(t)\frac{\partial\hat{H}(t)}{\partial z} - \sum_{i=1}^{\infty}\frac{\partial\hat{H}(t)}{\partial\theta_i}\zeta_i(t)\Big]dt = 0. \tag{4.15} \]
Applying the Itô product formula to $\hat{p}(t)\xi(t)$ on $[0, T]$, taking expectations and using (3.7) and (4.12), we get

\[ E[\hat{p}(T)\xi(T)] = E\int_0^T\Big[-\frac{\partial\hat{H}(t)}{\partial x}\xi(t) + \hat{p}(t)\Big(\frac{\partial b(t)}{\partial x}\xi(t) + \frac{\partial b(t)}{\partial u}\omega(t)\Big) + \hat{q}(t)\Big(\frac{\partial f(t)}{\partial x}\xi(t) + \frac{\partial f(t)}{\partial u}\omega(t)\Big) + \sum_{i=1}^{\infty}\hat{r}_i(t)\Big(\frac{\partial\delta_i(t)}{\partial x}\xi(t) + \frac{\partial\delta_i(t)}{\partial u}\omega(t)\Big)\Big]dt. \]

Letting $T \to \infty$ and using $\lim_{T\to\infty} E[\hat{p}(T)\xi(T)] = 0$ from (H6), the above equation becomes

\[ E\int_0^{\infty}\Big[-\frac{\partial\hat{H}(t)}{\partial x}\xi(t) + \hat{p}(t)\Big(\frac{\partial b(t)}{\partial x}\xi(t) + \frac{\partial b(t)}{\partial u}\omega(t)\Big) + \hat{q}(t)\Big(\frac{\partial f(t)}{\partial x}\xi(t) + \frac{\partial f(t)}{\partial u}\omega(t)\Big) + \sum_{i=1}^{\infty}\hat{r}_i(t)\Big(\frac{\partial\delta_i(t)}{\partial x}\xi(t) + \frac{\partial\delta_i(t)}{\partial u}\omega(t)\Big)\Big]dt = 0. \tag{4.16} \]
By the Hamiltonian (3.5), we have

\[ \frac{\partial H(t)}{\partial x} = \frac{\partial K(t)}{\partial x} + \frac{\partial g(t)}{\partial x}\lambda(t) + \frac{\partial b(t)}{\partial x}p(t) + \frac{\partial f(t)}{\partial x}q(t) + \sum_{i=1}^{\infty}\frac{\partial\delta_i(t)}{\partial x}r_i(t), \]

and similar expressions hold for $\frac{\partial H}{\partial y}$, $\frac{\partial H}{\partial z}$ and $\frac{\partial H}{\partial u}$. Substituting these into (4.16) and using (4.15), we get

\[ E\Big[\int_0^{\infty}\frac{\partial H(t)}{\partial u}\omega(t)\,dt\Big] = 0. \]

Now apply $\omega(t) = \alpha I_{[s,s+h)}(t)$, where $\alpha$ is bounded and $\xi_{t_0}$-measurable and $s \ge t_0$. Then we get

\[ E\Big[\int_s^{s+h}\frac{\partial H(t)}{\partial u}\,dt\ \alpha\Big] = 0. \]

Differentiating with respect to $h$ at $h = 0$, we obtain

\[ E\Big[\frac{\partial H(s)}{\partial u}\,\alpha\Big] = 0. \]
Since this holds for all $s \ge t_0$ and all such $\alpha$, we conclude that

\[ E\Big[\frac{\partial H(t_0)}{\partial u}\,\Big|\,\xi_{t_0}\Big] = 0. \]

This proves that (i) $\Rightarrow$ (ii). Conversely, (ii) $\Rightarrow$ (i): by retracing the above argument we get

\[ \frac{d}{ds}J(\hat{u} + s\omega)\Big|_{s=0} = 0. \]

Hence the proof is complete.
5. Example

Let us consider the following example. Assume that the given cost functional is

\[ J(u) = E\int_0^{\infty} e^{-\rho t}\,\frac{(u(t)X(t))^{\gamma}}{\gamma}\,dt, \]

where $\gamma \in (0, 1)$ and $\rho > 0$. The forward–backward stochastic differential system is of the form

\[ dX(t) = X(t-)\Big[(b_0(t) - u(t))\,dt + f_0(t)\,dW(t) + \sum_{i=1}^{\infty}\delta_i(t)\,dH^i(t)\Big], \quad t \ge 0, \tag{5.1} \]
\[ dY(t) = (\alpha(t)Y(t) - \ln u(t))\,dt + Z(t)\,dW(t) + \sum_{i=1}^{\infty}\theta_i(t)\,dH^i(t), \tag{5.2} \]
\[ X(0) = x, \quad x \ge 0, \tag{5.3} \]

so that, in the notation of (3.2), $g(t, y, u) = -\alpha(t)y + \ln u$. The Hamiltonian is

\[ H(t, x, y, z, \theta, u, \lambda, p, q, r) = e^{-\rho t}\,\frac{(ux)^{\gamma}}{\gamma} + \lambda(t)(-\alpha(t)y + \ln u) + x(b_0(t) - u(t))p(t) + x f_0(t)q(t) + x\sum_{i=1}^{\infty}\delta_i(t)r_i(t). \]
We seek $\hat{u}$ such that $\sup_{u\in\mathcal{A}} J(u) = J(\hat{u})$. Differentiating $H$ with respect to $u$ gives the first order condition

\[ E\Big[e^{-\rho t}X(t)^{\gamma}(u(t))^{\gamma-1} + \frac{\lambda(t)}{u(t)} - X(t)p(t)\,\Big|\,\xi_t\Big] = 0, \]

i.e.

\[ E\Big[e^{-\rho t}X(t)^{\gamma}(u(t))^{\gamma-1} + \frac{\lambda(t)}{u(t)}\,\Big|\,\xi_t\Big] = E[X(t)p(t)\,|\,\xi_t]. \tag{5.4} \]

The pair of adjoint equations is given by

\[ d\lambda(t) = -\alpha(t)\lambda(t)\,dt, \tag{5.5} \]
\[ dp(t) = -\Big[e^{-\rho t}u^{\gamma}X(t)^{\gamma-1} + (b_0(t) - u(t))p(t) + f_0(t)q(t) + \sum_{i=1}^{\infty}\delta_i(t)r_i(t)\Big]dt + q(t)\,dW(t) + \sum_{i=1}^{\infty}r_i(t)\,dH^i(t), \quad t \in (0, \infty), \tag{5.6} \]

with $\lambda(0) = 1$ and

\[ E\Big[\sup_{t\ge 0} e^{kt}Y^2(t) + \int_0^{\infty} e^{kt}\Big(Z^2(t) + \sum_{i=1}^{\infty}\theta_i^2(t)\Big)dt\Big] < \infty. \]
By Theorem 5.1 in [1], the adjoint equations (5.5) and (5.6) provide a pair of solutions $\lambda(t)$ and $p(t)$. Now Eq. (5.5) implies that

\[ \frac{d\lambda(t)}{\lambda(t)} = -\alpha(t)\,dt. \]

Integrating this equation from $0$ to $t$, and taking $\alpha(t)$ to be a constant $\alpha > 0$, we have

\[ \lambda(t) = e^{-\alpha t}. \tag{5.7} \]

Substituting (5.7) into (5.4), we get

\[ E\Big[e^{-\rho t}X^{\gamma}u^{\gamma-1} + \frac{e^{-\alpha t}}{u(t)}\,\Big|\,\xi_t\Big] = E[X(t)p(t)\,|\,\xi_t]. \]
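Treating the quantities inside the conditional expectation as known at time $t$, the first order condition can be solved pointwise for $u$. The sketch below uses hypothetical sample values for $t$, $X(t)$, $p(t)$, $\gamma$, $\rho$ and $\alpha$ (in the paper the condition holds under $E[\,\cdot\,|\,\xi_t]$), and bisects on $u > 0$, using that the left-hand side is strictly decreasing in $u$ for $\gamma \in (0, 1)$:

```python
import math

def candidate_control(t, X, p, gamma=0.5, rho=0.1, alpha=0.2):
    """Solve the first-order condition (5.4) pointwise for u > 0:
       exp(-rho t) X^gamma u^(gamma-1) + exp(-alpha t)/u = X p.
    All numeric inputs are hypothetical sample values."""
    lhs = lambda u: (math.exp(-rho * t) * X**gamma * u**(gamma - 1)
                     + math.exp(-alpha * t) / u)
    target = X * p
    # lhs decreases from +inf to 0 as u runs over (0, inf), so a root exists;
    # bisect on a wide bracket
    lo, hi = 1e-8, 1e8
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > target:   # mid too small: move right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u_hat = candidate_control(t=1.0, X=2.0, p=1.5)
```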
In (5.7), taking $t = T$ gives $\lim_{T\to\infty}\lambda(T) = 0$, and since $Y(t)$ is bounded uniformly in $t$, it follows that $\lim_{T\to\infty} E[\lambda(T)\Delta\hat{Y}(T)] = 0$. Hence the transversality condition (H3) of Theorem 4.1 holds. The hypotheses (H1) and (H2) also hold. Thus Theorem 4.1 tells us that there exists $\hat{u}$ which is an optimal control of the system (5.1)–(5.3).
Acknowledgments
The authors would like to express their sincere thanks to the editor and anonymous
reviewers for helpful comments and suggestions to improve the quality of this paper.
This work was supported by SERB YSS project, New Delhi, Govt. of India. F.No:
YSS/2014/000447 dated 20-11-2015. The second author is thankful to UGC, New
Delhi for providing BSR fellowship during 2016.
References
1. N. Agram, S. Haadem, B. Øksendal and F. Proske, A maximum principle for infinite
horizon delay equations, SIAM J. Math. Anal. 45 (2013) 2499–2522.
2. N. Agram and B. Øksendal, Infinite horizon optimal control of forward–backward
stochastic differential equations with delay, J. Comput. Appl. Math. 259 (2014) 336–
349.
3. D. Applebaum, Lévy Processes and Stochastic Calculus (Cambridge Univ. Press, 2009).
4. S. M. Aseev and A. V. Kryazhimskii, The Pontryagin maximum principle and optimal
economic growth problems, Proc. Steklov Inst. Math. 257 (2007) 1–255.
5. K. Bahlali, N. Khelfallah and B. Mezerdi, Optimality conditions for partial information stochastic control problems driven by Lévy processes, Syst. Control Lett. 61 (2012) 1079–1084.
6. P. Cartigny and P. Michel, On a sufficient transversality condition for infinite horizon
optimal control problems, Automatica. 39 (2003) 1007–1010.
7. A. V. Dmitruk and N. V. Kuz’kina, Existence theorem in the optimal control problem
on an infinite time interval, Math. Notes 78 (2005) 466–480.
8. S. Haadem, B. Øksendal and F. Proske, Maximum principles for jump diffusion pro-
cesses with infinite horizon, Automatica. 49 (2013) 2267–2275.
9. M. Hafayed, A. Abba and S. Abbas, On partial-information optimal singular control
problem for mean-field stochastic differential equations driven by Teugels martingales
measures, Internat. J. Control. 89 (2016) 397–410.
10. D. E. Kirk, Optimal Control Theory: An Introduction (Dover, 2004).
11. E. B. Lee and L. Markus, Foundations of Optimal Control Theory (Wiley, 1967).
12. Q. X. Meng and M. N. Tang, Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes, Sci. China Ser. F Inf. Sci. 52 (2009) 1982–1992.
13. Q. X. Meng, Z. Fu and M. N. Tang, Maximum principle for backward stochastic systems associated with Lévy processes under partial information, in Proc. of the 31st Chinese Control Conference, Hefei, China (IEEE, 2012), pp. 25–27.
14. K. Mitsui and Y. Tabata, A stochastic linear-quadratic problem with Lévy processes and its application to finance, Stoch. Proc. Appl. 118 (2008) 120–152.
15. D. Nualart and W. Schoutens, Chaotic and predictable representations for Lévy processes, Stoch. Proc. Appl. 90 (2000) 109–122.
16. D. Nualart and W. Schoutens, Backward stochastic differential equations and Feynman–Kac formula for Lévy processes, with applications in finance, Bernoulli 7 (2001) 761–776.
17. S. Pickenhain and V. Lykina, Sufficiency conditions for infinite horizon optimal control
problems, in Recent Advances in Optimization (Springer, 2006), pp. 217–232.
18. P. Protter, Stochastic Integration and Differential Equations (Springer, 1990).
19. J. Shi and Z. Wu, Maximum principle for forward–backward stochastic control system
with random jumps and applications to finance, J. Syst. Sci. Complex 23 (2010) 219–
231.
20. S. Peng and Y. Shi, Infinite horizon forward–backward stochastic differential equations, Stoch. Proc. Appl. 85 (2000) 75–92.
21. R. F. Stengel, Stochastic Optimal Control Theory and Application (John Wiley &
Sons, 1986).
22. H. Tang and Z. Wu, Stochastic differential equations and stochastic linear quadratic optimal control problem with Lévy processes, J. Syst. Sci. Complex. 22 (2009) 122–136.
23. M. N. Tang and Q. Zhang, Optimal variational principle for backward stochastic control systems associated with Lévy processes, Sci. China Math. 55 (2012) 745–761.
24. Z. Wu, A general maximum principle for optimal control of forward–backward stochas-
tic systems, Automatica. 49 (2013) 1473–1480.
25. J. Yin, On solutions of a class of infinite horizon FBSDEs, Stat. Probab. Lett. 78
(2008) 2412–2419.
26. J. Yong, Forward–backward stochastic differential equations with mixed initial-terminal conditions, Trans. Amer. Math. Soc. 362 (2010) 1047–1096.
27. J. Yong and X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations
(Springer, 1999).
28. G. Zong, Anticipated backward stochastic differential equations driven by the Teugels
martingales, J. Math. Anal. Appl. 412 (2014) 989–997.