2nd Reading
June 1, 2016 11:40 WSPC/S0219-4937 168-SD 1750020 1–17
Stochastics and Dynamics
Vol. 17, No. 3 (2017) 1750020 (17 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0219493717500204
Infinite horizon optimal control of forward–backward stochastic system driven by Teugels martingales with Lévy processes

P. Muthukumar* and R. Deepa†

Department of Mathematics, Gandhigram Rural Institute, Deemed University, Gandhigram – 624 302, Tamilnadu, India
*pmuthukumargri@gmail.com
†deepa.maths1729@gmail.com
Received 14 November 2015
Revised 2 April 2016
Accepted 17 April 2016
Published 3 June 2016
In this paper, we consider the infinite horizon nonlinear optimal control of a forward–backward stochastic system governed by Teugels martingales associated with Lévy processes and a one-dimensional independent Brownian motion. Our aim is to establish sufficient and necessary conditions for optimality of the above stochastic system under convexity assumptions. Finally, an application is given to illustrate the optimal control problem for such a stochastic system.
Keywords: Optimal control; infinite horizon; forward–backward stochastic system; Lévy processes; Teugels martingales.
AMS Subject Classification: 49J15, 60J65, 93E20
1. Introduction
In the last few decades, optimal control and its ramifications have played a vital role in many different fields, including aerospace, process control, robotics, bioengineering, economics, finance, and management science, and the subject continues to be an active research area in control theory [11]. Control theory studies how a physical system can be steered to a given goal by applying a control signal. In classical control theory the controller is heuristically designed, whereas in optimal control theory the controller is selected to minimize a given cost functional [10,11]. Aseev et al. discussed the Pontryagin maximum principle for optimal control problems in [4].
Optimal control theory with a class of infinite-horizon problems arises in studying models of optimal dynamic allocation of economic resources. In problems of this sort, the initial state is fixed, constraints are imposed on the behavior of the admissible trajectories at large times, and the objective functional is given by a discounted improper integral. Dmitruk et al. consider a broad class of problems on
infinite horizon, including most problems of economic dynamics, and propose natural conditions guaranteeing the existence of solutions to these problems in [7]. Cartigny et al. [6] assumed a transversality condition to derive sufficient optimality conditions. Pickenhain et al. [17] used duality concepts to prove sufficient conditions for optimality of infinite horizon optimal control problems. The authors of the above references studied the infinite horizon control problem in the deterministic case only.
Stochastic control theory is a crucial branch of mathematics with many important applications. In stochastic control, the system model and parameters are considered uncertain and the state is only indirectly measured [27]. Stochastic generalizations of optimal control results are used for determining the control signals in stochastic optimal control theory [21]. The class of forward–backward stochastic differential equations (FBSDEs) naturally arises in the form of a partially coupled system. This has many interesting applications, especially in finance, such as option pricing and recursive utility problems (see [19,26]). Wu [24] studied the Pontryagin-type maximum principle for the adjoint equation. FBSDEs with infinite time horizon are still the subject of intensive study. Peng et al. [20] and Yin [25] studied the behavior of the solution process under different sets of assumptions, both on the coefficients of the FBSDE and on the terminal condition. Maximum principles for an infinite horizon optimal control problem with partial information were studied by Haadem et al. [8].
Lévy processes form a fundamental class of stochastic processes; they are rich mathematical objects with many potential applications. The theory of Lévy processes leads to tractable and attractive models that perform significantly better than the standard model. Lévy processes are based on infinitely divisible distributions. In finance, the infinitely divisible distributions need to be able to represent skewness and excess kurtosis. The earlier models having these characteristics were proposed for modeling financial data, with the underlying normal distribution replaced by a more sophisticated infinitely divisible one. Lévy processes have been studied by many authors [3,14,22]. Nualart et al. [15] gave a martingale representation theorem associated with a class of Lévy processes. The existence and uniqueness of the solution of backward stochastic differential equations driven by Teugels martingales associated with a Lévy process having moments of all orders is derived in [16]. In [9] the authors studied a finite horizon mean-field controlled stochastic differential equation driven by Teugels martingales associated with some Lévy processes and an independent Brownian motion; they derived the necessary and sufficient conditions for the stochastic optimal control problem in the form of a stochastic maximum principle. Bahlali et al. [5] studied the optimal control of a similar kind of problem under some convexity assumptions. In [28] Zong investigated anticipated backward stochastic differential equations (ABSDEs) driven by the Teugels martingales associated with Lévy processes, obtaining the existence and uniqueness of solutions to these equations by means of a fixed-point theorem. Maximum principles for stochastic systems associated with Lévy processes, obtained by means of convex analysis and duality techniques, have been studied extensively by many authors (see
[13,23] and the references therein). Necessary and sufficient conditions for optimal control of stochastic systems driven by Lévy processes are proved in [12] by the classical method of convex variation.
Infinite horizon optimal control problems arise naturally in economics when dealing with dynamical models of optimal allocation of resources. In [5,9], the authors studied the stochastic optimal control problem with a finite time horizon. Øksendal et al. [2] deal with the infinite horizon optimal control problem for a forward–backward stochastic delay system and derive necessary and sufficient maximum principles for optimal control under partial information on an infinite horizon. Motivated by this, we construct a forward–backward stochastic system driven by Teugels martingales associated with Lévy processes [5], assuming an infinite horizon time domain, and derive necessary and sufficient optimality conditions by using the infinite horizon maximum principle for FBSDEs [2]. In this paper, the assumption of a transversality condition (see [1,2] and the references therein) is used to study the infinite horizon optimal control problem. Transversality conditions at infinity play an important role in applying the Pontryagin maximum principle: they characterize the behavior of the adjoint variables at infinity, which is also economically meaningful and provides important characteristics of optimal economic growth.
This paper is organized as follows. In Sec. 2, some preliminaries and notations about Teugels martingales are provided. The infinite horizon FBSDE problem is formulated in Sec. 3. The main results of this paper, sufficient and necessary conditions for optimality, are derived in Sec. 4. Section 5 gives an illustrative example applying the main result.
2. Preliminaries
A real-valued stochastic process L = {L(t), t ≥ 0} defined on a complete probability space (Ω, {F_t}_{t≥0}, P) is called a Lévy process if L has stationary and independent increments with L(0) = 0 and L(t) is continuous in probability, where {F_t}_{t≥0} satisfies the usual conditions, i.e. the filtration is a right-continuous increasing family of complete sub-σ-algebras of F. For the Lévy process L we denote by L(t−) = lim_{s↑t} L(s), t > 0, the left-limit process and by ΔL(t) = L(t) − L(t−) the jump size at time t. The probability law of L is determined by the one-dimensional distribution of L(t) for any t > 0, and this has the characteristic function

E[e^{ivL(t)}] = e^{tΨ(v)},

where v ∈ ℝ and Ψ(v) is the log-characteristic function of an infinitely divisible distribution. The Lévy–Khintchine formula [15] shows that Ψ must take the form

Ψ(v) = iav − (σ²/2)v² + ∫_ℝ [e^{ivx} − 1 − ivx I_{|x|≤1}] ν(dx),
1750020-3
Stoch. Dyn. Downloaded from www.worldscientific.com
by WEIZMANN INSTITUTE OF SCIENCE on 06/17/16. For personal use only.
2nd Reading
June 1, 2016 11:40 WSPC/S0219-4937 168-SD 1750020 4–17
P. Muthukumar & R. Deepa
where a ∈ ℝ, σ ≥ 0 and I is an indicator function. If σ² = 0, clearly [L, L](t) = L^{(2)}(t). From the Lévy–Khintchine formula, we see that in general a Lévy process consists of three independent parts: a linear deterministic part, a Brownian part and a pure jump part. The Lévy measure ν dictates how the jumps occur.
The processes L^{(i)} = {L^{(i)}(t), t ≥ 0}, for i = 1, 2, ..., are also Lévy processes, called the power jump processes; they jump at the same points as the original Lévy process. The Lévy measure ν, defined on ℝ₀ := ℝ\{0} and corresponding to the Lévy process L(t), satisfies the following:

(i) ∫_ℝ (1 ∧ x²) ν(dx) < ∞;
(ii) there exist ε, λ > 0 such that ∫_{ℝ\(−ε,ε)} e^{λ|x|} ν(dx) < ∞.

This implies that the random variable L(t) has moments of all orders, that is,

∫_{−∞}^{∞} |x|^i ν(dx) < ∞,  ∀ i ≥ 2.
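As an illustrative special case (not from the paper), take ν = λ̃δ₁ with λ̃ > 0, i.e. a standard Poisson process with unit jumps superposed on the drifted Brownian part. Then the Lévy–Khintchine exponent and the conditions (i)–(ii) read:

```latex
\Psi(v) = iav - \tfrac{\sigma^2}{2}v^2 + \tilde{\lambda}\,(e^{iv} - 1 - iv),
\qquad
\int_{\mathbb{R}} (1 \wedge x^2)\,\nu(dx) = \tilde{\lambda} < \infty,
\qquad
\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} e^{\lambda |x|}\,\nu(dx)
  = \tilde{\lambda}\,e^{\lambda} < \infty \quad (0 < \varepsilon < 1),
```

so ∫|x|^i ν(dx) = λ̃ for every i ≥ 2 and all power moments exist; the single atom at x = 1 lies inside the compensation region {|x| ≤ 1}, which produces the fully compensated term e^{iv} − 1 − iv.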
We assume that F_t is the smallest σ-algebra generated by W(t) and L(t), i.e.

F_t = σ(W(s), s ≤ t) ∨ σ(L(s), s ≤ t) ∨ N,

where W(·) = {W(t)}_{t≥0} is a one-dimensional Brownian motion and N denotes the totality of P-null sets. A convenient basis for martingale representation is provided by the so-called Teugels martingales; this family has the predictable representation property. The reader can refer to [15,16] for further details about Lévy processes. We denote by {H_i(t), t ≥ 0}_{i=1}^∞ the Teugels martingales associated with the Lévy process L(t). The family of processes {H_i(t)}_{i=1}^∞ is given by

H_i(t) = C_{i,i} Y^{(i)}(t) + C_{i,i−1} Y^{(i−1)}(t) + ··· + C_{i,1} Y^{(1)}(t),

where Y^{(i)}(t) = L^{(i)}(t) − m_i t for i ≥ 1, with m_1 = E[L(1)] = a + ∫_{|x|≥1} x ν(dx), m_i = ∫_{−∞}^{∞} x^i ν(dx) for all i ≥ 2, L^{(1)}(t) = L(t), and L^{(i)}(t) = Σ_{0≤s≤t} (ΔL(s))^i for i ≥ 2. The coefficients C_{i,j} correspond to the orthonormalization of the polynomials 1, x, x², ... with respect to the measure μ(dx) = x² ν(dx) + σ² δ₀(dx).
The Teugels martingales {H_i(t)}_{i=1}^∞ are pathwise strongly orthogonal and their predictable quadratic variation processes are given by ⟨H_i(t), H_j(t)⟩ = E[H_i(t)H_j(t)] = δ_{ij} t for any i, j, i.e. [H_i(t), H_j(t)] − ⟨H_i(t), H_j(t)⟩ is a zero-expectation martingale with piecewise constant trajectories. Moreover, [H_i(t), H_j(t)] − ⟨H_i(t), H_j(t)⟩ and the Lévy process have the same jump times. Here [H_i(t), H_j(t)] denotes the quadratic variation process corresponding to H_i(·) and H_j(·), also called the bracket process [18]. The martingale H_i(t) is called the orthonormalized i-th power jump process. As a consequence of this construction, every square-integrable martingale adapted to {F_t} can be represented as a stochastic integral with respect to the Brownian motion W(t) plus a sum of stochastic integrals with respect to the family {H_i(t)} of Teugels martingales.
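As a quick numerical sanity check (an illustration, not part of the paper), the sketch below simulates the first Teugels martingale for the simplest pure-jump case: a unit-jump Poisson process with intensity λ̃ (so σ = 0, ν = λ̃δ₁, m₁ = λ̃, and orthonormalization with respect to μ(dx) = x²ν(dx) gives H₁(t) = (N(t) − λ̃t)/√λ̃). The function name and parameters are of course hypothetical; the empirical mean of H₁(T) should be near 0 and its variance near T, reflecting ⟨H₁⟩(t) = t.

```python
import math
import random

def poisson_sample(rng, mean):
    """Knuth's method: multiply uniforms until the product falls below exp(-mean)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def teugels_h1_stats(intensity=2.0, horizon=1.0, n_paths=4000, seed=7):
    """Sample H1(T) = (N(T) - intensity*T)/sqrt(intensity) over many paths
    and return its empirical mean and variance (theory: mean 0, variance T)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_paths):
        n_jumps = poisson_sample(rng, intensity * horizon)
        samples.append((n_jumps - intensity * horizon) / math.sqrt(intensity))
    mean = sum(samples) / n_paths
    var = sum((s - mean) ** 2 for s in samples) / (n_paths - 1)
    return mean, var
```

With a few thousand paths the empirical mean is close to 0 and the variance close to the horizon T, consistent with the quadratic variation formula above.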
3. Formulation of the Problem
Let us consider the one-dimensional standard Brownian motion W(t) and let {H_i(t)}_{i=1}^∞ be the pathwise strongly orthonormal Teugels martingales associated with some Lévy process having moments of all orders. In this paper we study the following infinite horizon nonlinear optimal control problem for a coupled forward–backward stochastic differential system driven by Teugels martingales associated with Lévy processes:

(i) The forward equation in the unknown measurable process X(t) is defined as

dX(t) = b(t, X(t), u(t)) dt + f(t, X(t), u(t)) dW(t) + Σ_{i=1}^∞ δ_i(t, X(t−), u(t)) dH_i(t),   (3.1)

where t ∈ [0, ∞) and X(t−) denotes lim_{s→t, s<t} X(s).
(ii) The backward equation in the unknown measurable processes Y(t), Z(t) and θ(t) is defined as

dY(t) = −g(t, X(t), Y(t), Z(t), u(t)) dt + Z(t) dW(t) + Σ_{i=1}^∞ θ_i(t) dH_i(t),  t ∈ [0, ∞).   (3.2)

We interpret the infinite horizon backward stochastic differential equation (BSDE) (3.2) as follows: for all T < ∞, the triple (Y(t), Z(t), θ(t)) solves the equation

Y(t) = Y(T) + ∫_t^T g(s, X(s), Y(s), Z(s), u(s)) ds − ∫_t^T Z(s) dW(s) − ∫_t^T Σ_{i=1}^∞ θ_i(s) dH_i(s),   (3.3)

where Y(t) is bounded a.s., uniformly in t ∈ [0, ∞). It is assumed that the processes (Y(t), Z(t), θ_i(t)) also satisfy

E[ sup_t e^{kt} Y²(t) + ∫_0^∞ e^{kt} ( Z²(t) + Σ_{i=1}^∞ θ_i²(t) ) dt ] < ∞.
Here X, Y, Z, θ_i : [0, ∞) → ℝ. Let the admissible control set U be a nonempty convex subset of ℝ. An admissible control process u(·) is defined as an F_t-predictable process with values in U such that E[∫_0^∞ |u(t)|² dt] < ∞. We denote by A the set of all admissible control processes u(·). Let b, f, δ_i, θ_i : [0, ∞) × ℝ × U × Ω → ℝ and g : [0, ∞) × ℝ³ × U × Ω → ℝ, and assume the corresponding solution X(t) of the given system exists, with

E ∫_0^∞ |X(t)|² dt < ∞.
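To give a feel for the forward dynamics (3.1), here is a hedged Euler-type discretization (an illustration only, not the paper's method): it truncates the martingale series at i = 1 and drives the jump term with the compensated Poisson martingale H₁(t) = (N(t) − λ̃t)/√λ̃ from Sec. 2. The coefficient functions b, f, δ₁ and the control below are arbitrary placeholders.

```python
import math
import random

def euler_forward_sde(x0=1.0, horizon=1.0, n_steps=200, intensity=2.0, seed=11,
                      b=lambda t, x, u: 0.0,
                      f=lambda t, x, u: 0.0,
                      delta1=lambda t, x, u: 0.5,
                      control=lambda t: 0.0):
    """One Euler path of dX = b dt + f dW + delta1 dH1, truncating the series
    in (3.1) at i = 1, where H1 is the first Teugels martingale of a unit-jump
    Poisson process: dH1 = (dN - intensity*dt)/sqrt(intensity)."""
    rng = random.Random(seed)
    dt = horizon / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps):
        u = control(t)
        dw = rng.gauss(0.0, math.sqrt(dt))
        # Bernoulli approximation: at most one unit jump per small step
        dn = 1 if rng.random() < intensity * dt else 0
        dh1 = (dn - intensity * dt) / math.sqrt(intensity)
        x += b(t, x, u) * dt + f(t, x, u) * dw + delta1(t, x, u) * dh1
        t += dt
    return x
```

With the defaults (b = f = 0, constant δ₁), X(T) = x₀ + δ₁H₁(T), so averaging over many seeds recovers E[X(T)] = x₀, since H₁ is a zero-mean martingale.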
The corresponding performance functional for the system (3.1) and (3.2) is

J(u) = E[ ∫_0^∞ K(t, X(t), Y(t), Z(t), θ(t), u(t)) dt + h(Y(0)) ],   (3.4)

where K : [0, ∞) × ℝ³ × ℛ × U × Ω → ℝ (here ℛ is the set of all functions from ℝ₀ to ℝ), h : ℝ → ℝ, and K satisfies

E ∫_0^∞ |K(t, X(t), Y(t), Z(t), θ(t), u(t))| dt < ∞,  ∀ u ∈ A.

The optimal control problem is to find an optimal control u* ∈ A such that

sup_{u∈A} J(u) = J(u*).

The Hamiltonian H : [0, ∞) × ℝ³ × ℛ × U × ℝ³ × ℛ → ℝ is defined by

H(t, x(t), y(t), z(t), θ(t), u(t), λ(t), p(t), q(t), r(t))
  = K(t, x(t), y(t), z(t), θ(t), u(t)) + g(t, x(t), y(t), z(t), u(t)) λ(t)
  + b(t, x(t), u(t)) p(t) + f(t, x(t), u(t)) q(t) + Σ_{i=1}^∞ δ_i(t, x(t−), u(t)) r_i(t).   (3.5)
We assume that the coefficient functions b, f, δ_i, g, K and h are Fréchet differentiable (C¹) with respect to the variables (X, Y, Z, θ, u), and that

E ∫_0^∞ [ |∂b/∂x(t, x(t), u(t))|² + |∂f/∂x(t, x(t), u(t))|² + Σ_{i=1}^∞ |∂δ_i/∂x(t, x(t−), u(t))|² ] dt < ∞,

E ∫_0^∞ [ |∂b/∂u(t, x(t), u(t))|² + |∂f/∂u(t, x(t), u(t))|² + Σ_{i=1}^∞ |∂δ_i/∂u(t, x(t−), u(t))|² ] dt < ∞.
Let us introduce the following pair of forward–backward stochastic differential equations in the adjoint processes λ(t), p(t), q(t), r_i(t); the adjoint forward and backward equations are as follows:

dλ(t) = (∂H/∂y)(t) dt + (∂H/∂z)(t) dW(t) + Σ_{i=1}^∞ (∂H/∂θ_i)(t) dH_i(t),   (3.6)

dp(t) = −(∂H/∂x)(t) dt + q(t) dW(t) + Σ_{i=1}^∞ r_i(t) dH_i(t),   (3.7)

λ(0) = h′(Y(0)).
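For orientation, writing out the partial derivatives of the Hamiltonian (3.5) (only K and g depend on (y, z, θ), while b, f, δ_i enter through (x, u)) makes the adjoint pair (3.6)–(3.7) explicit termwise:

```latex
\frac{\partial H}{\partial y} = \frac{\partial K}{\partial y} + \frac{\partial g}{\partial y}\,\lambda(t), \qquad
\frac{\partial H}{\partial z} = \frac{\partial K}{\partial z} + \frac{\partial g}{\partial z}\,\lambda(t), \qquad
\frac{\partial H}{\partial \theta_i} = \frac{\partial K}{\partial \theta_i},
\qquad
\frac{\partial H}{\partial x} = \frac{\partial K}{\partial x} + \frac{\partial g}{\partial x}\,\lambda(t)
  + \frac{\partial b}{\partial x}\,p(t) + \frac{\partial f}{\partial x}\,q(t)
  + \sum_{i=1}^{\infty}\frac{\partial \delta_i}{\partial x}\,r_i(t).
```

This is exactly the expansion of ∂H/∂x used in the proof of Theorem 4.2 in Sec. 4.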
4. Main Results
In this section we use the Itô formula and some basic assumptions to derive the sufficient and necessary conditions for the infinite horizon optimal control of the stochastic system driven by Teugels martingales associated with Lévy processes.
4.1. The sufficient conditions for optimality

Theorem 4.1. Let û ∈ A with corresponding state process X̂(t), and assume the adjoint processes λ̂(t), p̂(t), q̂(t) and r̂(t) satisfy Eqs. (3.6), (3.7). Suppose û satisfies the following assertions:

(H1) (Concavity) The functions x ↦ h(x) and

(x(t), y(t), z(t), θ(t), u(t)) ↦ H(t, x(t), y(t), z(t), θ(t), u(t), λ̂(t), p̂(t), q̂(t), r̂(t))

are concave, for all t ∈ [0, ∞).

(H2) (Conditional maximum principle)

max_{v∈U} E[H(t, X̂(t), Ŷ(t), Ẑ(t), θ̂(t), v, λ̂(t), p̂(t), q̂(t), r̂(t)) | ξ_t]
  = E[H(t, X̂(t), Ŷ(t), Ẑ(t), θ̂(t), û(t), λ̂(t), p̂(t), q̂(t), r̂(t)) | ξ_t],

where ξ_t ⊆ F_t, t ≥ 0, is a given subfiltration representing the information available to the controller at time t.

(H3) (Transversality condition)

lim_{T→∞} E[p̂(T) ΔX̂(T)] ≤ 0  and  lim_{T→∞} E[λ̂(T) ΔŶ(T)] ≥ 0,

where ΔX̂(T) = X̂(T) − X(T) and ΔŶ(T) = Ŷ(T) − Y(T).

Then û is an optimal control for the problem (3.1)–(3.4), i.e.

J(û) = sup_{u∈A} J(u).
Proof. Let u ∈ A be arbitrary. We must prove that J(û) − J(u) ≥ 0, i.e. that û is an optimal control. Since

J(u) = E[ ∫_0^∞ K(t, X(t), Y(t), Z(t), θ(t), u(t)) dt + h(Y(0)) ],

we have

J(û) − J(u) = E ∫_0^∞ {K(t, X̂(t), Ŷ(t), Ẑ(t), θ̂(t), û(t)) − K(t, X(t), Y(t), Z(t), θ(t), u(t))} dt + E[h(Ŷ(0)) − h(Y(0))] = J₁ + J₂,   (4.1)

where

J₁ = E ∫_0^∞ {K̂(t) − K(t)} dt,  J₂ = E[h(Ŷ(0)) − h(Y(0))].
From the definition (3.5) of the Hamiltonian, K(t) can be written as

K(t) = H(t) − g(t)λ̂(t) − b(t)p̂(t) − f(t)q̂(t) − Σ_{i=1}^∞ δ_i(t, X(t−), u(t)) r̂_i(t),

K̂(t) = Ĥ(t) − ĝ(t)λ̂(t) − b̂(t)p̂(t) − f̂(t)q̂(t) − Σ_{i=1}^∞ δ̂_i(t, X̂(t−), û(t)) r̂_i(t).

Hence

J₁ = E ∫_0^∞ [ (Ĥ(t) − H(t)) − λ̂(t)Δĝ(t) − p̂(t)Δb̂(t) − q̂(t)Δf̂(t) − Σ_{i=1}^∞ Δδ̂_i(t) r̂_i(t) ] dt.   (4.2)
Since by (H1) H is concave, we have

H(t) − Ĥ(t) ≤ (∂Ĥ/∂x)(x − x̂) + (∂Ĥ/∂y)(y − ŷ) + (∂Ĥ/∂u)(u − û) + (∂Ĥ/∂z)(z − ẑ) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(θ_i(t) − θ̂_i(t)),

i.e.

Ĥ(t) − H(t) ≥ (∂Ĥ/∂x)(t)Δx̂(t) + (∂Ĥ/∂y)(t)Δŷ(t) + (∂Ĥ/∂u)(t)Δû(t) + (∂Ĥ/∂z)(t)Δẑ(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t)Δθ̂_i(t).

Substituting the above inequality into (4.2), we get

J₁ ≥ E ∫_0^∞ [ (∂Ĥ/∂x)(t)Δx̂(t) + (∂Ĥ/∂y)(t)Δŷ(t) + (∂Ĥ/∂z)(t)Δẑ(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t)Δθ̂_i(t) + (∂Ĥ/∂u)(t)Δû(t) − λ̂(t)Δĝ(t) − p̂(t)Δb̂(t) − q̂(t)Δf̂(t) − Σ_{i=1}^∞ Δδ̂_i(t) r̂_i(t) ] dt.   (4.3)
Now consider

J₂ = E[h(Ŷ(0)) − h(Y(0))].   (4.4)

Also by (H1), h is concave, so that

h(Y(0)) − h(Ŷ(0)) ≤ h′(Ŷ(0))(Y(0) − Ŷ(0)) = λ̂(0)(Y(0) − Ŷ(0)),

hence

h(Ŷ(0)) − h(Y(0)) ≥ λ̂(0)(Ŷ(0) − Y(0)) = λ̂(0)ΔŶ(0),

and therefore

E[h(Ŷ(0)) − h(Y(0))] ≥ E[λ̂(0)ΔŶ(0)];
by using the above inequality in (4.4), we have

J₂ ≥ E[λ̂(0)ΔŶ(0)].   (4.5)

Applying the Itô formula to the process λ̂(t)ΔŶ(t) on [0, T], taking expectations, and using (3.2), (3.6) and (H3), we have

E[λ̂(T)ΔŶ(T)] − E[λ̂(0)ΔŶ(0)] = E ∫_0^T [ (∂Ĥ/∂y)(t)ΔŶ(t) − λ̂(t)Δĝ(t) + (∂Ĥ/∂z)(t)ΔẐ(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t)Δθ̂_i(t) ] dt.
Letting T → ∞, we get

E[λ̂(0)ΔŶ(0)] = lim_{T→∞} E[λ̂(T)ΔŶ(T)] − E ∫_0^∞ [ (∂Ĥ/∂y)(t)ΔŶ(t) − λ̂(t)Δĝ(t) + (∂Ĥ/∂z)(t)ΔẐ(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t)Δθ̂_i(t) ] dt.   (4.6)

Substituting (4.6) into (4.5), we have

J₂ ≥ lim_{T→∞} E[λ̂(T)ΔŶ(T)] + E ∫_0^∞ [ −(∂Ĥ/∂y)(t)ΔŶ(t) + λ̂(t)Δĝ(t) − (∂Ĥ/∂z)(t)ΔẐ(t) − Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t)Δθ̂_i(t) ] dt.   (4.7)
Substituting (4.3) and (4.7) into (4.1), we get

J(û) − J(u) = J₁ + J₂
  ≥ lim_{T→∞} E[λ̂(T)ΔŶ(T)] + E ∫_0^∞ [ (∂Ĥ/∂x)(t)ΔX̂(t) + (∂Ĥ/∂u)(t)Δû(t) − p̂(t)Δb̂(t) − q̂(t)Δf̂(t) − Σ_{i=1}^∞ Δδ̂_i(t) r̂_i(t) ] dt.   (4.8)

Now applying the Itô formula to p̂(t)ΔX̂(t) on [0, T], taking expectations, and using (3.1), (3.7) and (H3), we have

E[p̂(T)ΔX̂(T)] = E ∫_0^T [ −(∂Ĥ/∂x)(t)ΔX̂(t) + p̂(t)Δb̂(t) + q̂(t)Δf̂(t) + Σ_{i=1}^∞ Δδ̂_i(t) r̂_i(t) ] dt.
Letting T → ∞, we have

lim_{T→∞} E[p̂(T)ΔX̂(T)] = E ∫_0^∞ [ −(∂Ĥ/∂x)(t)ΔX̂(t) + p̂(t)Δb̂(t) + q̂(t)Δf̂(t) + Σ_{i=1}^∞ Δδ̂_i(t) r̂_i(t) ] dt.

Substituting this equality into (4.8) and using the transversality condition (H3), we have

J(û) − J(u) ≥ E ∫_0^∞ (∂Ĥ/∂u)(t)Δû(t) dt = E ∫_0^∞ E[(∂Ĥ/∂u)(t) | ξ_t] Δû(t) dt ≥ 0,

where the last inequality follows from the conditional maximum principle (H2) together with the convexity of U. So we conclude that J(û) − J(u) ≥ 0, i.e. û is an optimal control. □
4.2. Necessary conditions of optimality

Theorem 4.2. Suppose that û ∈ A with corresponding solutions X̂(t), Ŷ(t), Ẑ(t), θ̂(t), λ̂(t), p̂(t), q̂(t) and r̂(t) of Eqs. (3.1), (3.2), (3.6) and (3.7), and assume that the following conditions hold:

(H4) For all t₀ ∈ (0, ∞), ε > 0 and all bounded ξ_{t₀}-measurable random variables α, the control process ω(t) defined by ω(t) = α I_{[t₀, t₀+ε)}(t) belongs to A, where

I_{[t₀, t₀+ε)}(t) = 1 if t ∈ [t₀, t₀+ε), and 0 otherwise.

(H5) For all u, ω ∈ A with ω bounded, there exists ε > 0 such that u(t) + sω(t) ∈ A for all s ∈ (−ε, ε) and t ∈ [0, ∞).

(H6) lim_{T→∞} E[p̂(T)ξ(T)] = 0 and lim_{T→∞} E[λ̂(T)Φ(T)] = 0.

Then the following assertions are equivalent:

(i) For all bounded ω ∈ A, (d/ds) J(û + sω)|_{s=0} = 0.

(ii) For all t ∈ [0, ∞),

E[ (∂/∂u) H(t, X̂(t), Ŷ(t), Ẑ(t), θ̂(t), u, λ̂(t), p̂(t), q̂(t), r̂(t)) | ξ_t ]_{u=û(t)} = 0.
Proof. (i) ⇒ (ii). Define the derivative processes

(d/ds) X^{û+sω}(t)|_{s=0} = ξ(t),  (d/ds) Y^{û+sω}(t)|_{s=0} = Φ(t),
(d/ds) Z^{û+sω}(t)|_{s=0} = η(t),  (d/ds) θ^{û+sω}(t)|_{s=0} = ζ(t),

and assume that

E ∫_0^∞ [ |(∂K/∂x)(t) ξ(t)| + |(∂K/∂y)(t) Φ(t)| + |(∂K/∂z)(t) η(t)| + |(∂K/∂u)(t) ω(t)| + Σ_{i=1}^∞ |(∂K/∂θ_i)(t) ζ_i(t)| ] dt < ∞.
Assume that (i) holds:

(d/ds) J(û + sω)|_{s=0} = 0.   (4.9)

But

(d/ds) J(û + sω)|_{s=0} = E[ ∫_0^∞ ( (∂K/∂x)(t) ξ(t) + (∂K/∂y)(t) Φ(t) + (∂K/∂z)(t) η(t) + (∂K/∂u)(t) ω(t) + Σ_{i=1}^∞ (∂K/∂θ_i)(t) ζ_i(t) ) dt + h′(Ŷ(0)) Φ(0) ].   (4.10)

Using (4.9) in (4.10) gives

E[ ∫_0^∞ ( (∂K/∂x)(t) ξ(t) + (∂K/∂y)(t) Φ(t) + (∂K/∂z)(t) η(t) + (∂K/∂u)(t) ω(t) + Σ_{i=1}^∞ (∂K/∂θ_i)(t) ζ_i(t) ) dt + h′(Ŷ(0)) Φ(0) ] = 0.   (4.11)

The derivative processes satisfy

dξ(t) = [ (∂b/∂x)(t) ξ(t) + (∂b/∂u)(t) ω(t) ] dt + [ (∂f/∂x)(t) ξ(t) + (∂f/∂u)(t) ω(t) ] dW(t) + Σ_{i=1}^∞ [ (∂δ_i/∂x)(t) ξ(t) + (∂δ_i/∂u)(t) ω(t) ] dH_i(t)   (4.12)

and

dΦ(t) = −[ (∂g/∂x)(t) ξ(t) + (∂g/∂y)(t) Φ(t) + (∂g/∂z)(t) η(t) + (∂g/∂u)(t) ω(t) ] dt + η(t) dW(t) + Σ_{i=1}^∞ ζ_i(t) dH_i(t).   (4.13)
Applying the Itô formula to the process λ̂(t)Φ(t) on [0, T], taking expectations, and using (3.6) and (4.13) implies that

E[λ̂(T)Φ(T)] − E[λ̂(0)Φ(0)] = E ∫_0^T [ λ̂(t)( −(∂g/∂x)(t) ξ(t) − (∂g/∂y)(t) Φ(t) − (∂g/∂z)(t) η(t) − (∂g/∂u)(t) ω(t) ) + Φ(t)(∂Ĥ/∂y)(t) + η(t)(∂Ĥ/∂z)(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t) ζ_i(t) ] dt.
Letting T → ∞, the above equality becomes

E[λ̂(0)Φ(0)] = lim_{T→∞} E[λ̂(T)Φ(T)] − lim_{T→∞} E ∫_0^T [ λ̂(t)( −(∂g/∂x)(t) ξ(t) − (∂g/∂y)(t) Φ(t) − (∂g/∂z)(t) η(t) − (∂g/∂u)(t) ω(t) ) + Φ(t)(∂Ĥ/∂y)(t) + η(t)(∂Ĥ/∂z)(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t) ζ_i(t) ] dt.

Since h′(Ŷ(0)) = λ̂(0), which implies that E[h′(Ŷ(0))Φ(0)] = E[λ̂(0)Φ(0)], and using (H6), the above equality can be written as

E[h′(Ŷ(0))Φ(0)] = −lim_{T→∞} E ∫_0^T [ λ̂(t)( −(∂g/∂x)(t) ξ(t) − (∂g/∂y)(t) Φ(t) − (∂g/∂z)(t) η(t) − (∂g/∂u)(t) ω(t) ) + (∂Ĥ/∂y)(t) Φ(t) + (∂Ĥ/∂z)(t) η(t) + Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t) ζ_i(t) ] dt.   (4.14)
Substituting Eq. (4.14) into (4.11), we then have

(d/ds) J(û + sω)|_{s=0} = E ∫_0^∞ [ (∂K/∂x)(t) ξ(t) + (∂K/∂y)(t) Φ(t) + (∂K/∂z)(t) η(t) + (∂K/∂u)(t) ω(t) + Σ_{i=1}^∞ (∂K/∂θ_i)(t) ζ_i(t) + λ̂(t)( (∂g/∂x)(t) ξ(t) + (∂g/∂y)(t) Φ(t) + (∂g/∂z)(t) η(t) + (∂g/∂u)(t) ω(t) ) − Φ(t)(∂Ĥ/∂y)(t) − η(t)(∂Ĥ/∂z)(t) − Σ_{i=1}^∞ (∂Ĥ/∂θ_i)(t) ζ_i(t) ] dt = 0.   (4.15)
Applying the Itô formula to the process p̂(t)ξ(t) on [0, T], taking expectations, and using (3.7) and (4.12), we get

E[p̂(T)ξ(T)] = E ∫_0^T [ −(∂Ĥ/∂x)(t) ξ(t) + p̂(t)( (∂b/∂x)(t) ξ(t) + (∂b/∂u)(t) ω(t) ) + q̂(t)( (∂f/∂x)(t) ξ(t) + (∂f/∂u)(t) ω(t) ) + Σ_{i=1}^∞ r̂_i(t)( (∂δ_i/∂x)(t) ξ(t) + (∂δ_i/∂u)(t) ω(t) ) ] dt.

Letting T → ∞, the above equation can be written as

lim_{T→∞} E[p̂(T)ξ(T)] = E ∫_0^∞ [ −(∂Ĥ/∂x)(t) ξ(t) + p̂(t)( (∂b/∂x)(t) ξ(t) + (∂b/∂u)(t) ω(t) ) + q̂(t)( (∂f/∂x)(t) ξ(t) + (∂f/∂u)(t) ω(t) ) + Σ_{i=1}^∞ r̂_i(t)( (∂δ_i/∂x)(t) ξ(t) + (∂δ_i/∂u)(t) ω(t) ) ] dt.

Since by (H6) lim_{T→∞} E[p̂(T)ξ(T)] = 0, the above equation becomes

E ∫_0^∞ [ −(∂Ĥ/∂x)(t) ξ(t) + p̂(t)( (∂b/∂x)(t) ξ(t) + (∂b/∂u)(t) ω(t) ) + q̂(t)( (∂f/∂x)(t) ξ(t) + (∂f/∂u)(t) ω(t) ) + Σ_{i=1}^∞ r̂_i(t)( (∂δ_i/∂x)(t) ξ(t) + (∂δ_i/∂u)(t) ω(t) ) ] dt = 0.   (4.16)
From the Hamiltonian (3.5), we have

∂H/∂x = ∂K/∂x + (∂g/∂x) λ(t) + (∂b/∂x) p(t) + (∂f/∂x) q(t) + Σ_{i=1}^∞ (∂δ_i/∂x) r_i(t),

and analogous expressions hold for ∂H/∂y, ∂H/∂z and ∂H/∂u. Substituting the above equality into (4.16) and using (4.15), we get

E ∫_0^∞ (∂Ĥ/∂u)(t) ω(t) dt = 0.

Now apply ω(t) = α I_{[s,s+h)}(t), where α is bounded and ξ_{t₀}-measurable and s ≥ t₀. Then we get

E[ ( ∫_s^{s+h} (∂Ĥ/∂u)(t) dt ) α ] = 0.

Differentiating with respect to h at h = 0, we obtain

E[ (∂Ĥ/∂u)(s) α ] = 0.
Since this holds for all s ≥ t₀ and all such α, we conclude that

E[ (∂Ĥ/∂u)(t₀) | ξ_{t₀} ] = 0.

This proves that (i) ⇒ (ii).
Conversely, (ii) ⇒ (i) follows by retracing the above argument to obtain

(d/ds) J(û + sω)|_{s=0} = 0.

Hence the proof is complete. □
5. Example
Consider the following example. Assume that the given cost functional is

J(u) = E ∫_0^∞ e^{−ρt} (u(t)X(t))^γ / γ dt.

The forward–backward stochastic differential system is of the form

dX(t) = X(t)[ (b₀(t) − u(t)) dt + f₀(t) dW(t) + Σ_{i=1}^∞ δ_i(t) dH_i(t) ],  t ≥ 0,   (5.1)

dY(t) = −(−α(t)Y(t) + ln u(t)) dt + Z(t) dW(t) + Σ_{i=1}^∞ θ_i(t) dH_i(t),   (5.2)

X(0) = x,  x ≥ 0,   (5.3)

where γ ∈ (0, 1) and ρ > 0. Now the Hamiltonian is

H(t, x, y, z, θ, u, λ, p, q, r) = e^{−ρt} (u(t)x(t))^γ / γ + λ(t)(−α(t)y + ln u) + x(t)(b₀(t) − u(t)) p(t) + x(t) f₀(t) q(t) + x(t) Σ_{i=1}^∞ δ_i(t) r_i(t).

We seek û such that sup_{u∈A} J(u) = J(û).
Differentiating H with respect to u gives the first-order condition

E[ e^{−ρt} X(t)^γ u(t)^{γ−1} + λ(t)/u(t) − X(t)p(t) | ξ_t ] = 0,

i.e.

E[ e^{−ρt} X(t)^γ u(t)^{γ−1} + λ(t)/u(t) | ξ_t ] = E[ X(t)p(t) | ξ_t ].   (5.4)

The pair of adjoint equations is given by

dλ(t) = −α(t)λ(t) dt,   (5.5)
dp(t) = −[ e^{−ρt} u(t)^γ X(t)^{γ−1} + (b₀(t) − u(t)) p(t) + f₀(t) q(t) + Σ_{i=1}^∞ δ_i(t) r_i(t) ] dt + q(t) dW(t) + Σ_{i=1}^∞ r_i(t) dH_i(t),  t ∈ (0, ∞),   (5.6)

λ(0) = 1,

and

E[ sup_t e^{kt} Y²(t) + ∫_0^∞ e^{kt} ( Z²(t) + Σ_{i=1}^∞ θ_i²(t) ) dt ] < ∞.

By Theorem 5.1 in [1], the adjoint backward stochastic differential equations (5.5) and (5.6) provide a pair of solutions λ(t) and p(t). Now Eq. (5.5) implies that

dλ(t)/λ(t) = −α(t) dt.

Integrating this equation from 0 to t and taking α(t) to be a constant value α > 0, we have

λ(t) = e^{−αt}.   (5.7)

Substituting (5.7) into (5.4), we get

E[ e^{−ρt} X(t)^γ u(t)^{γ−1} + e^{−αt}/u(t) | ξ_t ] = E[ X(t)p(t) | ξ_t ].

In (5.7), taking t = T gives lim_{T→∞} λ(T) = 0, which implies that lim_{T→∞} E[λ(T)ΔY(T)] ≥ 0 holds. Hence the transversality condition (H3) of Theorem 4.1 holds; the hypotheses (H1) and (H2) also hold. Thus Theorem 4.1 tells us that there exists û which is an optimal control of the system (5.1)–(5.3).
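For a numerical feel, the first-order condition (5.4) can be solved pointwise for the candidate control once X(t) and p(t) are known. The sketch below is an illustration only, under full information (so the conditional expectations drop out), and the values of x, p, γ, ρ, α, t are hypothetical. It exploits that the left-hand side e^{−ρt}x^γ u^{γ−1} + e^{−αt}/u is strictly decreasing in u for γ ∈ (0, 1), falling from +∞ to 0, so for xp > 0 geometric bisection finds the unique root.

```python
import math

def candidate_control(x, p, t, gamma=0.5, rho=0.1, alpha=0.2,
                      lo=1e-8, hi=1e8, iters=200):
    """Solve exp(-rho*t)*x**gamma*u**(gamma-1) + exp(-alpha*t)/u = x*p for u.
    Both left-hand terms are strictly decreasing in u for gamma in (0, 1),
    so for x*p > 0 there is exactly one positive root."""
    def foc(u):
        return (math.exp(-rho * t) * x ** gamma * u ** (gamma - 1.0)
                + math.exp(-alpha * t) / u - x * p)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)   # bisect in log scale to cover (0, inf)
        if foc(mid) > 0.0:         # still above the root: move lower bound up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with x = 1, p = 5 and t = 0 the condition reduces to u^{−1/2} + u^{−1} = 5, whose unique positive root the routine recovers.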
Acknowledgments
The authors would like to express their sincere thanks to the editor and anonymous
reviewers for helpful comments and suggestions to improve the quality of this paper.
This work was supported by the SERB YSS project, New Delhi, Govt. of India (F.No: YSS/2014/000447 dated 20-11-2015). The second author is thankful to UGC, New Delhi, for providing a BSR fellowship during 2016.
References
1. N. Agram, S. Haadem, B. Øksendal and F. Proske, A maximum principle for infinite
horizon delay equations, SIAM J. Math. Anal. 45 (2013) 2499–2522.
2. N. Agram and B. Øksendal, Infinite horizon optimal control of forward–backward
stochastic differential equations with delay, J. Comput. Appl. Math. 259 (2014) 336–
349.
3. D. Applebaum, Lévy Processes and Stochastic Calculus (Cambridge Univ. Press, 2009).
4. S. M. Aseev and A. V. Kryazhimskii, The Pontryagin maximum principle and optimal economic growth problems, Proc. Steklov Inst. Math. 257 (2007) 1–255.
5. K. Bahlali, N. Khelfallah and B. Mezerdi, Optimality conditions for partial information stochastic control problems driven by Lévy processes, Syst. Control Lett. 61 (2012) 1079–1084.
6. P. Cartigny and P. Michel, On a sufficient transversality condition for infinite horizon optimal control problems, Automatica 39 (2003) 1007–1010.
7. A. V. Dmitruk and N. V. Kuz'kina, Existence theorem in the optimal control problem on an infinite time interval, Math. Notes 78 (2005) 466–480.
8. S. Haadem, B. Øksendal and F. Proske, Maximum principles for jump diffusion processes with infinite horizon, Automatica 49 (2013) 2267–2275.
9. M. Hafayed, A. Abba and S. Abbas, On partial-information optimal singular control problem for mean-field stochastic differential equations driven by Teugels martingales measures, Internat. J. Control 89 (2016) 397–410.
10. D. E. Kirk, Optimal Control Theory: An Introduction (Dover, 2004).
11. E. B. Lee and L. Markus, Foundations of Optimal Control Theory (Wiley, 1967).
12. Q. X. Meng and M. N. Tang, Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes, Sci. China Ser. F Inf. Sci. 52 (2009) 1982–1992.
13. Q. X. Meng, Z. Fu and M. N. Tang, Maximum principle for backward stochastic systems associated with Lévy processes under partial information, in Proc. of the 31st Chinese Control Conference, Hefei, China (IEEE, 2012), pp. 25–27.
14. K. Mitsui and Y. Tabata, A stochastic linear-quadratic problem with Lévy processes and its application to finance, Stoch. Proc. Appl. 118 (2008) 120–152.
15. D. Nualart and W. Schoutens, Chaotic and predictable representations for Lévy processes, Stoch. Proc. Appl. 90 (2000) 109–122.
16. D. Nualart and W. Schoutens, Backward stochastic differential equations and Feynman–Kac formula for Lévy processes, with applications in finance, Bernoulli 7 (2001) 761–776.
17. S. Pickenhain and V. Lykina, Sufficiency conditions for infinite horizon optimal control problems, in Recent Advances in Optimization (Springer, 2006), pp. 217–232.
18. P. Protter, Stochastic Integration and Differential Equations (Springer, 1990).
19. J. Shi and Z. Wu, Maximum principle for forward–backward stochastic control system with random jumps and applications to finance, J. Syst. Sci. Complex. 23 (2010) 219–231.
20. S. Peng and Y. Shi, Infinite horizon forward–backward stochastic differential equations, Stoch. Proc. Appl. 85 (2000) 75–92.
21. R. F. Stengel, Stochastic Optimal Control: Theory and Application (John Wiley & Sons, 1986).
22. H. Tang and Z. Wu, Stochastic differential equations and stochastic linear quadratic optimal control problem with Lévy processes, J. Syst. Sci. Complex. 22 (2009) 122–136.
23. M. N. Tang and Q. Zhang, Optimal variational principle for backward stochastic control systems associated with Lévy processes, Sci. China Math. 55 (2012) 745–761.
24. Z. Wu, A general maximum principle for optimal control of forward–backward stochastic systems, Automatica 49 (2013) 1473–1480.
25. J. Yin, On solutions of a class of infinite horizon FBSDEs, Stat. Probab. Lett. 78 (2008) 2412–2419.
26. J. Yong, Forward–backward stochastic differential equations with mixed initial-terminal conditions, Trans. Amer. Math. Soc. 362 (2010) 1047–1096.
27. J. Yong and X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations (Springer, 1999).
28. G. Zong, Anticipated backward stochastic differential equations driven by the Teugels martingales, J. Math. Anal. Appl. 412 (2014) 989–997.