A BFSK Neural Network Demodulator With Fast Training Hints

Mohammad Reza Amini, Mohammad Moghadasi, Iman Fatehi
Islamic Azad University, Boroujerd Branch
Abstract—In this paper, an artificial neural network (ANN) demodulator for binary frequency shift keying (BFSK) signals is proposed. Compared with the conventional coherent and non-coherent demodulators, and with other previously proposed neural network demodulators, this demodulator has several important features. In contrast with conventional demodulators, this demodulator (which uses a tapped delay line in each of its two layers) needs no band pass filter to select the desired frequency band, no pulse shaping filter whose output sharpness must be managed, and no synchronous local oscillator or other usual demodulator components. It is purely a neural-network implementation, which could be called a soft demodulator: once it is trained properly for a particular kind of modulation it works well for that kind, and it can easily be retrained for another modulation scheme without changing the hardware, i.e., train it and then use it. Compared with previously proposed ANN demodulators, it can be trained faster (with fewer training data bits), has a more efficient BER curve, and achieves better performance (MSE or SSE).

Keywords—demodulation; neural network; communication; FSK; BER; MATLAB simulation
I. INTRODUCTION
Frequency shift keying (FSK) is one of the common digital modulation schemes, in which data bits are mapped onto distinct frequencies. There are two common demodulators: coherent and non-coherent. A general-purpose receiver has wide application prospects if the same demodulation system can demodulate differently modulated signals. To demodulate a particular symbol with an artificial neural network (ANN), enough samples of that symbol over a given period of time are required to avoid errors. This means that some kind of memory must be modeled in the ANN to account for previous samples of a symbol. This memory can be implemented as a tapped delay line (widely used in equalizers). When a neural network is trained adaptively to demodulate different modulation schemes (as is common in software radio), training time becomes even more important.
The features of the FSK signal and the conventional methods of FSK demodulation are discussed in Section II. A neural network with memory at the input of each layer (TDNN) is compared with the ELMAN network in Section III. The design and learning algorithm of the ANN demodulator are discussed in Section IV. Simulation and performance results are shown in Section V, and finally some conclusions are given in Section VI.
II. BFSK DEMODULATION
Normally, the FSK signal is expressed by

z(t) = A\cos\left(\omega_c t + \omega_d \int_0^t D(\tau)\,d\tau\right)    (1)

where D(\tau) is a random binary pulse sequence with amplitude +1 or -1 for binary bits 1 and 0, respectively. In this way each data bit is mapped onto a distinct frequency related to its value, and a pulse shaping filter might be used in the modulator.
Usually, demodulation methods for the FSK signal can be divided into two types: coherent demodulation (matched filtering) and non-coherent demodulation. They are illustrated in Fig. 1. A coherent demodulator consists of two parallel paths, each of which computes the correlation of the received signal with one of the symbols produced by the transmitter. A non-coherent demodulator also consists of two paths, but each of them is an envelope detector preceded by a band pass filter tuned to one of the transmitter frequencies.
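As a sketch, the coherent branch of Fig. 1 amounts to correlating each received bit interval with the two candidate tones and choosing the larger correlation. The reference phases are assumed known (which is exactly the coherent assumption); function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def coherent_demod(z, fs=20000, baud=1000, f0=3000.0, f1=6000.0):
    """Correlate each bit interval with both tones; pick the stronger one."""
    spb = fs // baud                        # samples per bit
    t = np.arange(spb) / fs
    ref0 = np.cos(2 * np.pi * f0 * t)       # reference tone for bit 0
    ref1 = np.cos(2 * np.pi * f1 * t)       # reference tone for bit 1
    bits = []
    for k in range(len(z) // spb):
        seg = z[k * spb:(k + 1) * spb]
        bits.append(1 if abs(seg @ ref1) > abs(seg @ ref0) else 0)
    return bits

# Noiseless tones at exactly f0/f1 are recovered perfectly
tones = np.concatenate([np.cos(2 * np.pi * f * np.arange(20) / 20000)
                        for f in (3000.0, 6000.0, 3000.0)])
decoded = coherent_demod(tones)
```

Over a 1 ms bit the two tones complete whole numbers of cycles (3 and 6), so they are orthogonal over the correlation window and the decision is unambiguous in the noiseless case.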
III. ARTIFICIAL NEURAL NETWORK WITH MEMORY (ELMAN AND TDNN NETWORKS)
The Elman network is commonly a two-layer network with feedback from the first-layer output to the first-layer input. This recurrent connection allows the Elman network to both detect and generate time-varying patterns. A two-layer Elman network is shown in Fig. 2 [1,2,3]. The Elman network has tansig neurons in its hidden (recurrent) layer and purelin neurons in its output layer. This combination is special in that two-layer networks with these transfer functions can approximate any function (with a finite number of discontinuities) with arbitrary accuracy. The only requirement is that the hidden layer must have enough neurons.
2010 Second International Conference on Communication Software and Networks
978-0-7695-3961-4/10 $26.00 © 2010 IEEE
DOI 10.1109/ICCSN.2010.123
Fig. 1. Coherent and non-coherent FSK demodulator.
More hidden neurons are needed as the function being fitted increases in complexity. Elman networks are not as reliable as some other kinds of networks, because both training and adaptation use an approximation of the error gradient. For an Elman network to have the best chance of learning a problem, it needs more hidden neurons than are actually required for a solution by another method. While a solution might be available with fewer neurons, the Elman network is less able to find the most appropriate hidden-layer weights because the error gradient is approximated. Therefore, starting with a fair number of neurons makes it more likely that the hidden neurons will begin by dividing up the input space in useful ways. Consequently, the Elman network is not suitable for real-time applications and adaptive training, at least because of training time, memory limitations, and training-algorithm convergence (hardware resources), as indicated in [1,2]. The receiver therefore needs a fast ANN demodulator that can store enough previous samples of each symbol.

Fig. 2. Elman Network Architecture
A simple way to store data samples and use them to demodulate a symbol is to apply a tapped-delay line (TDL) at the beginning of each layer. This is called the distributed time-delay neural network (TDNN) [4]. The original architecture was very specialized for a particular problem; Fig. 3 shows a general two-layer distributed TDNN. This network is well suited to time-series prediction. In the figure, f_i is the activation (transfer) function of layer i, IW and LW are the input-weight and layer-weight vectors respectively, and b_i is the i-th layer bias vector [2,3,5].
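The tapped-delay-line idea can be sketched generically: at time n, a layer sees the current sample together with the D most recent samples stacked into one feature vector. This is an illustration of the TDL concept, not the paper's MATLAB model:

```python
import numpy as np

def tapped_delay_line(x, delays=4):
    """Return, for each time step n, the vector
    [x[n], x[n-1], ..., x[n-delays]] (zeros before the start).
    This is the memory a TDNN layer sees instead of a single sample."""
    x = np.asarray(x, dtype=float)
    cols = [np.concatenate([np.zeros(d), x[:len(x) - d]])
            for d in range(delays + 1)]
    return np.stack(cols, axis=1)   # shape: (len(x), delays + 1)

X = tapped_delay_line([1.0, 2.0, 3.0], delays=2)
# row n holds [x[n], x[n-1], x[n-2]]
```

With the paper's choice of 4 delay elements at the hidden layer, each hidden neuron sees 5 consecutive signal samples per time step, which is the "memory" that lets the network integrate evidence over part of a symbol period.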
IV. DESIGNING AND TRAINING
A. Designing
The SIMULINK toolbox of MATLAB is used to build the corresponding models of the ANN demodulators. The transfer function of the hidden layer is tansig and that of the output layer is purelin, with 8 neurons in the hidden layer. A back-propagation algorithm is used to train the ANN demodulator. To simulate the TDNN network, time-delay lines with 4 and 2 elements are placed at the hidden and output layers respectively. Tables I and II show the structural specifications of the two ANNs (ELMAN and TDNN) in the MATLAB simulation.
Fig. 3. TDNN Network Architecture
TABLE I. ELMAN network parameters.
ELMAN_NET =
Neural Network object:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [1 0; 1 0]
outputConnect: [0 1]
Delay:
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 1 (read-only)
functions:
adaptFcn: 'trains'
divideFcn: 'dividerand'
gradientFcn: 'calcjxfp'
initFcn: 'initlay'
performFcn: 'mse'
plotFcns: {'plotperform','plottrainstate'}
trainFcn: 'traingdx'
TABLE II. TDNN network parameters.
TDNN_NET =
Neural Network object:
numInputs: 1
numLayers: 2
biasConnect: [1; 1]
inputConnect: [1; 0]
layerConnect: [0 0; 1 0]
outputConnect: [0 1]
Delay:
numOutputs: 1 (read-only)
numInputDelays: 4 (read-only)
numLayerDelays: 2 (read-only)
functions:
adaptFcn: 'trains'
divideFcn: 'dividerand'
gradientFcn: 'calcjxfp'
initFcn: 'initlay'
performFcn: 'mse'
plotFcns: {'plotperform','plottrainstate'}
trainFcn: 'trainlm'
B. Fast ANN Training Hints
Training an ANN with memory, such as ELMAN or TDNN, to reach a desired performance (MSE or SSE) is a time-consuming process, because those networks use a feedback structure to update the network weights, as simulated and shown in [2,4,5]. If a small number of training data bits with a simple fixed pattern can be used while still trusting the network to work properly, training a dynamic network takes definitely less time to reach a better performance. The data bit sequences used to train this demodulator are short, noiseless runs of successive 0 and 1 bits (i.e., 0 and 1 alternating). Since the training data pattern is simple and fixed, the neural network can be trained faster and with a small number of data bits; moreover, because the bits alternate between 0 and 1, the neural demodulator is trained on the worst case of incoming data (successive 0, 1 transitions) so as to reach a sharp output. However, such a trained demodulator may have some problems demodulating bit patterns that do not occur in the training pattern (for example 00000 or 11111). In this case, the output of the demodulator may exhibit spikes at the beginning of each symbol period.
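A minimal sketch of the training pattern described above: a short noiseless run of successive 0, 1 bits, together with the desired demodulator output (each bit value held for every sample of its period). The names and the 0/1 target convention are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def training_pair(n_bits=20, fs=20000, baud=1000):
    """Successive 0, 1 training bits plus the desired ANN output:
    each bit value held constant over its whole symbol period."""
    bits = np.array([k % 2 for k in range(n_bits)])   # 0,1,0,1,...
    spb = fs // baud                                  # samples per bit
    target = np.repeat(bits, spb).astype(float)       # sample-level target
    return bits, target

bits, target = training_pair()   # 20 bits -> 400 target samples
```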
C. Removing the spike effect
The best way to remove the effect of the mentioned spikes and noise, so as to detect data bits with less error, is to integrate and dump (I&D, shown in Fig. 4) the demodulator output over each symbol period and then compare the result with a suitable threshold to decide the bit value (0 or 1).
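A sketch of the I&D decision stage: average the (possibly spiky) demodulator output over each bit period, dump, and threshold. The 0.5 threshold assumes 0/1 output targets and is an illustrative choice, since the paper only specifies "a suitable threshold":

```python
import numpy as np

def integrate_and_dump(y, fs=20000, baud=1000, threshold=0.5):
    """Average the demodulator output over each symbol period and
    threshold it; averaging suppresses short start-of-symbol spikes."""
    y = np.asarray(y, dtype=float)
    spb = fs // baud                       # samples per bit
    n_bits = len(y) // spb
    means = y[:n_bits * spb].reshape(n_bits, spb).mean(axis=1)
    return (means > threshold).astype(int).tolist()

# A short spike at a symbol boundary does not flip the decision
y = np.repeat([0.05, 0.95, 0.05], 20)
y[40] = 3.0                                # spike at start of third symbol
decided = integrate_and_dump(y)
```

Because one large sample contributes only 1/20 of the per-symbol mean at these rates, isolated spikes move the integrated value far less than the gap between the 0 and 1 levels.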
V. SIMULATION AND RESULTS
The basic parameters of the training signals for the ANN demodulator are:
Signal modulation mode: 2FSK
Sampling frequency: 20000 Hz
Carrier wave frequencies: 3000 Hz, 6000 Hz
Bit duration: 1 ms
Baud rate: 1000 bps
Number of training data bits: 20
Fig. 5 shows the training signal.
Fig. 4. Adding I&D to demodulator
Figs. 6 and 7 show the learning curves of the two demodulators (ELMAN and TDNN) with both random data bits and successive 0, 1 data bits. As expected, both ELMAN and TDNN with successive 0, 1 training data bits reach the desired performance much faster than with random training: for example, TDNN with successive 0, 1 training data bits reaches an MSE of 0.0050 in 13 epochs, but with random training data bits it only reaches an MSE of 4.554 after 289 epochs, while the demodulators proposed in [2,3,6] need much more time (or equivalently, many more epochs) to train properly.
Fig. 5. Training Binary data (Top). BFSK signal (Bottom)
(a)
(b)
Fig. 6. Learning curve of ELMAN ANN. (a). Successive 0, 1 data bits. (b).
Random data bits.
(a)
(b)
Fig. 7. Learning curve of TDNN ANN (a). Successive 0, 1 data bits. (b).
Random data bits.
Fig. 8 shows two signals: one is the output signal processed by the ANN demodulator (it is the same for both ELMAN and TDNN) and the other is the original testing data (desired output). Note the spikes in the output.
The final figure is the BER curve of the ANN demodulator over an AWGN (additive white Gaussian noise) channel, compared with the curves of the conventional demodulators. These curves are obtained under the same simulation conditions for all demodulators (and are the same for both ELMAN and TDNN). It can easily be seen that the proposed ANN demodulator is more efficient than the non-coherent one at detecting data and performs very close to the coherent demodulator, but with no filtering, no synchronized oscillator, and so on.
Fig. 8. Output of ANN demodulator (red) compared with
desired output (blue).
Fig. 9. BER curve in AWGN channel
VI. CONCLUSION
In this paper, a fast-trained ANN demodulator for the FSK signal has been proposed and its efficiency demonstrated. The simulation results show that the ANN demodulator can effectively demodulate the FSK signal in an AWGN channel. Training time matters in many systems, such as software and military radios; this demodulator can be trained much faster than previously proposed ANN demodulators and also demodulates the signal efficiently from the BER point of view in an AWGN channel. Its output is also very sharp, and it needs no band pass filter, synchronized oscillator, and so on. The ANN demodulator is used here to demodulate the FSK signal, but it can in fact serve as a general-purpose system for modulated signals. It can demodulate differently modulated signals, such as ASK and PSK, without any change to the basic structure; only the biases and weights of the ANN change. If the ANN demodulator is implemented as a digital logic circuit, the difference between the FSK demodulator and the ASK demodulator is only the transfer of two different groups of parameters from digital memory to the ANN demodulator system. The characteristics of the ANN demodulator therefore open broad possibilities in the field of modern communication.
REFERENCES
[1] "DARPA Neural Network Study," Lexington, MA: M.I.T. Lincoln Laboratory, 1988.
[2] M. Li and H. Zhong, "Neural Network Demodulator for Frequency Shift Keying," 2008 International Conference on Computer Science and Software Engineering, IEEE, 2008. DOI 10.1109/CSSE.2008.1440.
[3] K. Nakayama and K. Imai, "A neural demodulator for amplitude shift keying signals," IEEE International Conference on Neural Networks, vol. 6, June-July 1994.
[4] K. Ohnishi and K. Nakayama, "A neural demodulator for quadrature amplitude modulation signals," IEEE International Conference on Neural Networks, 1996.
[5] E. D. Chesmore, "Neural network architectures for signal detection and demodulation," Fifth International Conference on Radio Receivers and Associated Systems, July 1990, pp. 1-4.
[6] J. F. Weng, S. H. Leung, W. H. Lau, and G. G. Bi, "A new neural network based multiuser detector in impulse noise," IEEE International Conference on Communications (ICC '96), vol. 1, June 1996, pp. 541-545.