Modelling Financial Time Series
P. Manimaran^1, J.C. Parikh^1, P.K. Panigrahi^1, S. Basu^2, C.M. Kishtawal^2 and M.B. Porecha^1

^1 Physical Research Laboratory, Ahmedabad 380 009, India. parikh@prl.res.in
^2 Space Applications Centre, Ahmedabad 380 015, India.
Summary. Financial time series, in general, exhibit average behaviour at "long" time scales and stochastic behaviour at "short" time scales. As in statistical physics, the two have to be modelled using different approaches – deterministic for trends and probabilistic for fluctuations about the trend. In this talk, we describe a new wavelet based approach to separate the trend from the fluctuations in a time series. A deterministic (non-linear regression) model is then constructed for the trend using a genetic algorithm. We thereby obtain an explicit analytic model that describes the dynamics of the trend. Further, the model is used to make predictions of the trend. We also study statistical and scaling properties of the fluctuations. The fluctuations have a non-Gaussian probability distribution function and show multiscaling behaviour. Thus, our work results in a comprehensive model of trends and fluctuations of a financial time series.
1 Introduction
In this presentation, I shall report on the work we have done to study financial
time series.
If one observes time series of stock prices, exchange rates of currencies or commodity prices, one finds that they have some common characteristic features. They exhibit smooth mean behaviour (cyclic trends) at long time scales and fluctuations about the smooth trend at much shorter time scales. As illustrative examples, values of the S&P CNX Nifty index of the NSE at 30-second and daily intervals are shown in Figs. 1 and 2 respectively.
Note also that the time series are not stationary.
In order to model [2] series having these features, we follow the standard approach in statistical physics. More precisely, we consider the mean (trend) to have deterministic behaviour and the fluctuations to have stochastic behaviour. Accordingly, we model them by quite different methods – the smooth trend by deterministic dynamics and the fluctuations by probabilistic methods.
In view of this, we first separate mean behaviour from fluctuations in the data. Once this is done, we can decompose a time series {X(i), i = 1, 2, ..., N} as

X(i) = X_m(i) + X_f(i),   (i = 1, 2, ..., N)   (1)

where X_m(i) is the mean component and X_f(i) the fluctuating component.

Fig. 1. Nifty high frequency data sampled every 30 seconds for 10 days in January, 1999. (Dates: January 1, 4, 5, 6, 7, 8, 11, 12, 13, 14)

Fig. 2. The daily closing price of the S&P CNX Nifty index for the period November 3, 1995 – April 11, 2000
To carry out this separation we have proposed [3] and used discrete wavelet transforms (DWT) [4], a method well suited for non-stationary data. Section 2 describes in brief some basic ideas of DWT and how it is used to determine the smooth component {X_m(i)}. The fluctuating component {X_f(i)} is obtained by subtracting {X_m(i)} from {X(i)} (see Eq. (1)). We have also compared [3] our procedure with other methods [5] of extracting fluctuations, by studying scaling properties of a computer generated non-stationary series.
In Section 3, we model the smooth series {X_m(i); i = 1, 2, ..., N} in the form of a general non-linear regression equation. The form of the "best" non-linear function is determined by fitting the data using a Genetic Algorithm [6]. Out of sample predictions are made using the model.

Section 4 contains a discussion of statistical properties of the fluctuations {X_f(i); i = 1, 2, ..., N}. Finally, a summary of our work and some concluding remarks are given in Section 5.
2 Discrete wavelets – separation of trend from fluctuations
Wavelet transforms [4] decompose a signal S(t) into components that simultaneously provide information about its local time and frequency characteristics. Therefore, they are very useful for studying non-stationary signals. We use discrete wavelets in our work because one can obtain a complete orthonormal basis set of strictly finite size. This gives significant mathematical and computational advantages. In the present work, we have used Coiflet-2 (Cf-2) wavelets [4].
In wavelets one starts from basis functions, the father wavelet φ(t) (scaling function) and the mother wavelet ψ(t) [4]. These functions obey ∫ φ(t) dt = A and ∫ ψ(t) dt = 0, where A is a constant. Scaling and translation of the wavelets lead to φ_k = φ(t − k) and ψ_{j,k} = 2^{j/2} ψ(2^j t − k), which satisfy the orthogonality conditions ⟨φ_k|φ_{k'}⟩ = δ_{k,k'}, ⟨φ_k|ψ_{j,k'}⟩ = 0 and ⟨ψ_{j,k}|ψ_{j',k'}⟩ = δ_{j,j'} δ_{k,k'}.
Any signal belonging to L²(R) can be written in the form

S(t) = Σ_{k=−∞}^{∞} c_k φ_k(t) + Σ_{k=−∞}^{∞} Σ_{j=0}^{∞} d_{j,k} ψ_{j,k}(t)   (2)

where the c_k's are the low-pass coefficients and the d_{j,k}'s are the high-pass coefficients.
We next describe the way we get the smooth trend series starting from the
data series.
A forward Cf-2 transformation is first applied to the time series {X(i); i = 1, 2, ..., N}. This gives N wavelet coefficients. Of these, N/2 are low pass coefficients that describe the average behaviour locally, and the other half, N/2, are high pass coefficients corresponding to local fluctuations. In order to obtain the mean (smooth) series, we set the high pass coefficients to zero and then an inverse Cf-2 transformation is carried out. This results in a "level one" time series of the trend, in which fluctuations at the smallest time scale have been filtered out. One can repeat the entire process on the "level one" series to filter out fluctuations at the next higher time scale to get a "level two" trend series, and so on. Fig. 3 shows the actual series {X(i)} and the (Cf-2, level 4) trend series {X_m(i)} of the NASDAQ composite index. Having determined {X_m(i)}, the fluctuation series {X_f(i)} is obtained using Eq. (1).
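As an illustration of this level-by-level filtering, the sketch below uses the PyWavelets library (an assumption; 'coif2' there denotes the Coiflet-2 wavelet). Border handling and other details of the implementation in ref. [3] may differ.

```python
# Sketch of the wavelet trend/fluctuation separation described above.
# Assumes PyWavelets (pywt) and NumPy; 'coif2' is PyWavelets' Coiflet-2 filter.
import numpy as np
import pywt

def wavelet_trend(x, wavelet="coif2", levels=4):
    """Return the smooth trend X_m obtained by repeatedly zeroing the
    highest-frequency (detail) coefficients and inverting the transform."""
    trend = np.asarray(x, dtype=float)
    for _ in range(levels):
        # Single-level DWT: cA are the low-pass (average) coefficients,
        # cD the high-pass (fluctuation) coefficients.
        cA, cD = pywt.dwt(trend, wavelet, mode="periodization")
        # Discard fluctuations at the smallest remaining time scale.
        trend = pywt.idwt(cA, np.zeros_like(cD), wavelet, mode="periodization")
        trend = trend[: len(x)]          # guard against padding for odd lengths
    return trend

if __name__ == "__main__":
    x = np.cumsum(np.random.randn(1024)) + 100.0   # stand-in for an index series
    x_m = wavelet_trend(x, levels=4)               # smooth trend
    x_f = x - x_m                                  # fluctuation series, Eq. (1)
```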
Fig. 3. The actual series {X(i)} and the (Cf-2, level 4) trend series {X_m(i)} of the NASDAQ composite index
It is important to test the validity of our method. For this purpose, we have studied [3] the scaling behaviour of fluctuations of a computer generated Binomial Multi-Fractal (BMF) series [7].
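For completeness, a short sketch of the standard BMF construction of refs. [5, 7] is given below; the multiplier a = 0.75 is chosen for illustration and reproduces the analytical h_ex(q) values listed in Table 1.

```python
# Sketch: binomial multifractal (BMF) test series, following the standard
# construction used in refs. [5, 7]; a = 0.75 is chosen here for illustration.
import numpy as np

def bmf_series(n_max=16, a=0.75):
    """x_k = a**n(k-1) * (1-a)**(n_max - n(k-1)), k = 1..2**n_max,
    where n(k) is the number of ones in the binary representation of k."""
    k = np.arange(2 ** n_max)
    ones = np.array([bin(int(i)).count("1") for i in k])
    return a ** ones * (1.0 - a) ** (n_max - ones)

def h_exact(q, a=0.75):
    """Analytical scaling exponent of the BMF series (ref. [5]), for q != 0."""
    return 1.0 / q - np.log2(a ** q + (1.0 - a) ** q) / q

series = bmf_series()        # test series used to validate the method
```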
The qth order fluctuation function F_q(s) [5] is obtained by squaring and averaging fluctuations over M_s forward and M_s backward segments:

F_q(s) = { (1/2M_s) Σ_{b=1}^{2M_s} [F²(b, s)]^{q/2} }^{1/q}   (3)

Here, 'q' is the order of the moment, which takes real values, and s is the scale. The above procedure is repeated for variable window sizes for different values of q (except q = 0). The scaling behaviour is obtained by analyzing the fluctuation function

F_q(s) ~ s^{h(q)}   (4)

on a logarithmic scale for each value of q. If the order q = 0, direct evaluation of Eq. (3) leads to divergence of the scaling exponent. In that case, logarithmic averaging has to be employed to find the fluctuation function:

F_0(s) = exp{ (1/4M_s) Σ_{b=1}^{2M_s} ln F²(b, s) }   (5)

As is well known, if the time series is mono-fractal, the h(q) values are independent of q. For a multi-fractal time series, the h(q) values depend on q.
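To make Eqs. (3)–(5) concrete, the sketch below follows the plain MF-DFA recipe of ref. [5], detrending each window with a local polynomial fit; in the wavelet based approach of ref. [3] that local fit is replaced by the wavelet-derived trend, but the fluctuation-function bookkeeping is the same.

```python
# Sketch of the fluctuation-function analysis of Eqs. (3)-(5), in the standard
# MF-DFA form of ref. [5] (local polynomial detrending). Assumes NumPy.
import numpy as np

def fluctuation_function(x, s, q, poly_order=1):
    """F_q(s) over 2*M_s segments (forward and backward), Eqs. (3) and (5)."""
    profile = np.cumsum(x - np.mean(x))           # integrated series (profile)
    n = len(profile)
    m_s = n // s
    segments = [profile[b * s:(b + 1) * s] for b in range(m_s)]
    segments += [profile[n - (b + 1) * s: n - b * s] for b in range(m_s)]
    f2 = []
    t = np.arange(s)
    for seg in segments:
        coeffs = np.polyfit(t, seg, poly_order)   # local trend in this window
        resid = seg - np.polyval(coeffs, t)
        f2.append(np.mean(resid ** 2))            # F^2(b, s)
    f2 = np.asarray(f2)
    if q == 0:                                    # Eq. (5): logarithmic averaging
        return np.exp(0.5 * np.mean(np.log(f2)))
    return (np.mean(f2 ** (q / 2.0))) ** (1.0 / q)    # Eq. (3)

def scaling_exponent(x, q, scales):
    """h(q) from the slope of log F_q(s) versus log s, Eq. (4)."""
    logF = [np.log(fluctuation_function(x, s, q)) for s in scales]
    slope, _ = np.polyfit(np.log(scales), logF, 1)
    return slope
```

For example, scaling_exponent(series, q=2, scales=[16, 32, 64, 128, 256]) estimates h(2) for the BMF test series above.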
In Table 1, we compare the exactly known analytical values of the scaling exponent h_ex(q) of the BMF series with the ones obtained by our wavelet based method (h_w(q)) and those of the earlier method [5] using a local polynomial fit (h_p(q)). Note that the wavelet based approach gives excellent results.
Table 1. The h(q) values of the binomial multi-fractal series (BMF) computed analytically (h_ex(q)), through MF-DFA (h_p(q)) and through the wavelet approach (h_w(q)); the Db-8 wavelet has been used.

   q    h_ex(q)   h_p(q)    h_w(q)
 -10    1.9000    1.9304    1.8991
  -9    1.8889    1.9184    1.8879
  -8    1.8750    1.9032    1.8740
  -7    1.8572    1.8837    1.8560
  -6    1.8337    1.8576    1.8319
  -5    1.8012    1.8210    1.7981
  -4    1.7544    1.7663    1.7473
  -3    1.6842    1.6783    1.6641
  -2    1.5760    1.5397    1.5218
  -1    1.4150    1.3939    1.3828
   0    0         1.2030    1.2163
   1    1.0000    0.9934    1.0091
   2    0.8390    0.8312    0.8453
   3    0.7309    0.7234    0.7359
   4    0.6606    0.6538    0.6649
   5    0.6139    0.6075    0.6177
   6    0.5814    0.5753    0.5848
   7    0.5578    0.5519    0.5610
   8    0.5400    0.5343    0.5430
   9    0.5261    0.5205    0.5290
  10    0.5150    0.5095    0.5178
We have also evaluated the scaling exponent h(q) for the NASDAQ data. This is shown in Fig. 4, where h(q) is a non-linear function of q, indicating that the fluctuations have a multi-fractal nature.
Fig. 4. Scaling exponent for the NASDAQ fluctuations
3 Model of the trend series
We now construct a deterministic model for the smooth series {X_m(i)} shown in Fig. 3. For convenience of notation, we define

y(i) ≡ X_m(i),   (i = 1, 2, ..., N)   (6)

Further, we assume that y(i) is a non-linear function of d previous values of the variable y. More precisely, we have

y(i) = F(y(i−1), y(i−2), ..., y(i−d)),   (i = d+1, d+2, ..., N)   (7)

where F is an unknown non-linear function and the number of lags d is also not known. It is worth pointing out that, in the study of chaotic time series [8], using the method of time delays to reconstruct the state space, d is actually the embedding dimension of the attractor. We determine its value from false nearest neighbour analysis [8]. For the smoothened NASDAQ series of Fig. 3, the embedding dimension is d = 4. Therefore,

y(i) = F(y(i−1), y(i−2), ..., y(i−4)),   (i = 5, ..., N)   (8)
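A minimal sketch of the false nearest neighbour test is given below, assuming unit delay and the conventional tolerance r_tol = 10 (a standard choice, not a value quoted in the text).

```python
# Sketch of false nearest neighbour (FNN) analysis [8], used above to choose
# the number of lags d (the embedding dimension). Assumes NumPy and SciPy.
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(y, d, r_tol=10.0):
    """Fraction of false nearest neighbours for embedding dimension d (delay 1)."""
    y = np.asarray(y, dtype=float)
    n = len(y) - d                       # points that also have a (d+1)-th coordinate
    emb = np.column_stack([y[i:i + n] for i in range(d)])   # d-dim delay vectors
    tree = cKDTree(emb)
    dist, idx = tree.query(emb, k=2)     # nearest neighbour other than the point itself
    neighbour = idx[:, 1]
    d_dim = dist[:, 1]
    # Distance increase when the (d+1)-th delay coordinate is added.
    extra = np.abs(y[np.arange(n) + d] - y[neighbour + d])
    false = extra > r_tol * np.maximum(d_dim, 1e-12)
    return np.mean(false)

def embedding_dimension(y, d_max=10, threshold=0.01):
    """Smallest d for which the FNN fraction drops below the threshold."""
    for d in range(1, d_max + 1):
        if fnn_fraction(y, d) < threshold:
            return d
    return d_max
```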
We now use a genetic algorithm (GA) [6] to obtain the function F that best represents the deterministic map of the trend series. Following the GA approach of ref. [6], we begin with 200 randomly constructed strings. These strings contain random sequences of the four basic arithmetic symbols (+, –, ×, ÷), the values of the variable y at earlier times, and real number constants. This choice implies that, in the GA framework, the search for the function F is restricted to the form of a ratio of polynomials – in a way similar to a Padé approximation. The equation strings were tested on a training set consisting of the first 700 data points of the trend series. A measure of fitness was defined to evaluate the performance of each string. The strings are then ordered in decreasing value of fitness and combined pairwise, beginning with the fittest. We retain 50 such pairs which in turn reproduce, so that each combined string has four offspring. The first two are identical to the two parents and the remaining two are formed by interchanging parts. A small percentage of the elements in the strings are mutated. At this stage fitness is again computed and the entire cycle of ordering, combining, reproducing and mutation of strings is repeated. This is continued for 5000 iterations.
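The following is a deliberately simplified illustration of such a search, not the string representation of ref. [6]: each candidate is a ratio of monomials in the four lagged values (Eq. (9) below is of exactly this form, with exponent vector (2, 0, −3, 1) for lags 1 to 4).

```python
# Much-simplified illustration of a GA search for the map F. Each candidate is a
# product of integer powers of the four lagged values, a special case of the
# Padé-like ratio of polynomials mentioned above. This is only a sketch of the
# evolutionary loop, not the method of ref. [6]. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
LAGS = 4

def model(y, i, expo):
    """y_hat(i) = prod_k y(i-k)**expo[k-1], k = 1..LAGS."""
    return np.prod([y[i - k] ** e for k, e in enumerate(expo, start=1)])

def fitness(expo, y, train_end):
    preds = np.array([model(y, i, expo) for i in range(LAGS, train_end)])
    err = np.mean((preds - y[LAGS:train_end]) ** 2)
    return 1.0 / (1.0 + err)

def evolve(y, train_end=700, pop_size=200, generations=200):
    pop = rng.integers(-3, 4, size=(pop_size, LAGS))        # random exponent strings
    for _ in range(generations):
        scores = np.array([fitness(ind, y, train_end) for ind in pop])
        parents = pop[np.argsort(scores)[::-1]][: pop_size // 2]   # fittest half
        children = np.roll(parents, 1, axis=0).copy()        # pair each with the next fittest
        cut = rng.integers(1, LAGS, size=pop_size // 2)
        for j in range(pop_size // 2):                       # one-point crossover
            children[j, :cut[j]] = parents[j, :cut[j]]
        mutate = rng.random(children.shape) < 0.05           # small mutation rate
        children[mutate] += rng.integers(-1, 2, size=mutate.sum())
        children = np.clip(children, -4, 4)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, y, train_end) for ind in pop])]
    return best
```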
The map that results for our data after carrying out this procedure has the form

y(i) = y(i−4) (y(i−1))² / (y(i−3))³,   (i = 5, ..., 700)   (9)

The fitness was 0.9.
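A minimal sketch of the one step ahead out of sample evaluation is given below; "mean error" is read here as the mean relative error and a "sign mismatch" as a disagreement in the sign of the predicted change, which is our interpretation of the measures quoted in the next paragraph.

```python
# Sketch: one step ahead out of sample prediction with the evolved map, Eq. (9),
# and two evaluation measures (mean relative error, fraction of sign mismatches).
# Assumes NumPy; `y` is the wavelet-smoothed trend series as a NumPy array, and
# points beyond index 700 are treated as out of sample, as in the text.
import numpy as np

def predict_next(y, i):
    """Eq. (9): predict y[i] from its lagged values y[i-1], y[i-3], y[i-4]."""
    return y[i - 4] * y[i - 1] ** 2 / y[i - 3] ** 3

def evaluate(y, start=700):
    preds = np.array([predict_next(y, i) for i in range(start, len(y))])
    actual = y[start:]
    prev = y[start - 1: len(y) - 1]                 # last known value before each prediction
    mean_rel_err = np.mean(np.abs(preds - actual) / np.abs(actual))
    sign_mismatch = np.mean(np.sign(preds - prev) != np.sign(actual - prev))
    return mean_rel_err, sign_mismatch
```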
Clearly, if the map is a good representation of the dynamics, we ought to
have good out of sample predictions. These are shown in Fig. 5.
Fig. 5. Out of sample (one step ahead) predictions of the trend series
We get very promising results – the mean error is < 0.1% and sign mismatches occur in only about 5% of the predictions. Similar results were also obtained for the BSE Sensex.

It is worth stressing that these are one time step ahead predictions. If the map (Eq. (9)) is used iteratively to make dynamic predictions, the error grows very quickly. This suggests that the long term dynamics is not captured by the map.
4 Statistical properties of fluctuations
These properties have been extensively studied and reported in literature (e.g.,
see refs. [1]-[2]). All the same, for the sake of completeness, we show in the
figures below:
(i) Probability distribution function (PDF) of returns (Fig. 6)
(ii) Auto-correlation function of returns (Fig. 7)
(iii) Auto-correlation function of absolute value of returns (Fig. 8)
Fig. 6 shows that the PDF is not Gaussian – it has skewness = -0.07
and kurtosis = 8.27. Note that for a Gaussian PDF these reduced cumulants
are zero. Figs. 7 and 8 together show that the binary correlations go to zero
quickly but the “volatility” correlations persist for a long time. This means
that fluctuations are not independent.
As mentioned earlier, these fluctuation properties are well-known to occur
in financial markets [1,2].
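A short sketch of these diagnostics is given below, assuming log returns of the index series (the precise definition of returns is not specified above) and using NumPy/SciPy.

```python
# Sketch of the three diagnostics shown in Figs. 6-8: the shape of the return PDF
# (skewness and excess kurtosis) and the autocorrelation of returns and of
# absolute returns. Assumes NumPy and SciPy; `prices` stands for the index series.
import numpy as np
from scipy import stats

def return_statistics(prices, max_lag=100):
    r = np.diff(np.log(prices))                    # log returns
    skew = stats.skew(r)                           # 0 for a Gaussian PDF
    ex_kurt = stats.kurtosis(r)                    # excess kurtosis, 0 for a Gaussian

    def acf(x, lags):
        x = x - x.mean()
        var = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / var for k in lags])

    lags = np.arange(1, max_lag + 1)
    return skew, ex_kurt, acf(r, lags), acf(np.abs(r), lags)
```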
Fig. 6. Probability distribution function (PDF) of returns
Fig. 7. Auto-correlation function of returns
Fig. 8. Auto-correlation function of absolute value of returns
5 Summary and future work
Observation of financial market data suggests that the dynamics of the market
may be viewed as a superposition of long term deterministic trend and short
scale random fluctuations.
We have used discrete wavelet transforms to separate the two parts and
developed suitable models.
The trend has been modelled in the form of a non-linear auto-regressive
map. An explicit analytic functional form for the map is found using GA. It
gives an excellent fit to the data set and makes very reliable out of sample
single step predictions. Multi-step predictions are not good.
Statistical and scaling properties of fluctuations were determined and as
expected were consistent with earlier findings [1,2].
Regarding further work in this direction, a major challenge is to apply the GA methodology to arrive at a model capable of making long term predictions of the trend series.
In conclusion, we have thus made a beginning towards simultaneously
modelling trend and fluctuations of a time series.
References

1. Mandelbrot BB (1997) Fractals and Scaling in Finance, Springer-Verlag, New York; Bouchaud JP, Potters M (2000) Theory of Financial Risk, Cambridge University Press, Cambridge; Mantegna RN, Stanley HE (2000) An Introduction to Econophysics, Cambridge University Press, Cambridge
2. Parikh JC (2003) Stochastic Processes and Financial Markets, Narosa, New Delhi
3. Manimaran P, Panigrahi PK, Parikh JC (2005) Phys. Rev. E 72:046120
4. Daubechies I (1992) Ten Lectures on Wavelets, SIAM, Philadelphia
5. Kantelhardt JW, Zschiegner SA, Koscielny-Bunde E, Havlin S, Bunde A, Stanley HE (2002) Physica A 316:87
6. Szpiro GG (1997) Phys. Rev. E 55:2557
7. Feder J (1988) Fractals, Plenum Press, New York
8. Abarbanel HDI, Brown R, Sidorowich JJ, Tsimring LS (1993) Rev. Mod. Phys. 65:1331